Feb 13 15:08:15.225361 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:08:15.225408 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:08:15.225435 kernel: KASLR disabled due to lack of seed
Feb 13 15:08:15.225451 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:08:15.225467 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 15:08:15.225482 kernel: secureboot: Secure boot disabled
Feb 13 15:08:15.225500 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:08:15.225516 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:08:15.225531 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 15:08:15.225547 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:08:15.225567 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:08:15.225583 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:08:15.225598 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:08:15.225614 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:08:15.225632 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:08:15.225653 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:08:15.225670 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:08:15.225687 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:08:15.225703 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:08:15.225720 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:08:15.225737 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:08:15.225753 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:08:15.228702 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:08:15.228741 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:08:15.228758 kernel: Zone ranges:
Feb 13 15:08:15.228801 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:08:15.228827 kernel: DMA32 empty
Feb 13 15:08:15.228845 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:08:15.228861 kernel: Movable zone start for each node
Feb 13 15:08:15.228877 kernel: Early memory node ranges
Feb 13 15:08:15.228893 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:08:15.228910 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:08:15.228926 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:08:15.228943 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:08:15.228959 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:08:15.228976 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:08:15.228992 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:08:15.229009 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:08:15.229030 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:08:15.229047 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:08:15.229071 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:08:15.229088 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:08:15.229106 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:08:15.229127 kernel: psci: Trusted OS migration not required
Feb 13 15:08:15.229145 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:08:15.229163 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:08:15.229180 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:08:15.229198 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:08:15.229215 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:08:15.229233 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:08:15.229250 kernel: CPU features: detected: Spectre-v2
Feb 13 15:08:15.229267 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:08:15.229284 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:08:15.229301 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:08:15.229319 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:08:15.229341 kernel: alternatives: applying boot alternatives
Feb 13 15:08:15.229361 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:08:15.229380 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:08:15.229398 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:08:15.229415 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:08:15.229432 kernel: Fallback order for Node 0: 0
Feb 13 15:08:15.229449 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 15:08:15.229467 kernel: Policy zone: Normal
Feb 13 15:08:15.229484 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:08:15.229501 kernel: software IO TLB: area num 2.
Feb 13 15:08:15.229523 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:08:15.229541 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Feb 13 15:08:15.229558 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:08:15.229576 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:08:15.229594 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:08:15.229612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:08:15.229630 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:08:15.229647 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:08:15.229665 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:08:15.229683 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:08:15.229700 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:08:15.229722 kernel: GICv3: 96 SPIs implemented
Feb 13 15:08:15.229740 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:08:15.229756 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:08:15.229886 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:08:15.229906 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:08:15.229923 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:08:15.229941 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:08:15.229958 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:08:15.229975 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:08:15.229992 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:08:15.230010 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:08:15.230027 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:08:15.230051 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:08:15.230069 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:08:15.230086 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:08:15.230104 kernel: Console: colour dummy device 80x25
Feb 13 15:08:15.230122 kernel: printk: console [tty1] enabled
Feb 13 15:08:15.230141 kernel: ACPI: Core revision 20230628
Feb 13 15:08:15.230159 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:08:15.230177 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:08:15.230195 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:08:15.230218 kernel: landlock: Up and running.
Feb 13 15:08:15.230236 kernel: SELinux: Initializing.
Feb 13 15:08:15.230254 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:08:15.230271 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:08:15.230289 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:08:15.230307 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:08:15.230325 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:08:15.230343 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:08:15.230361 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:08:15.230383 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:08:15.230401 kernel: Remapping and enabling EFI services.
Feb 13 15:08:15.230418 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:08:15.230435 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:08:15.230454 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:08:15.230472 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:08:15.230489 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:08:15.230508 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:08:15.230525 kernel: SMP: Total of 2 processors activated.
Feb 13 15:08:15.230547 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:08:15.230565 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:08:15.230583 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:08:15.230613 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:08:15.230636 kernel: alternatives: applying system-wide alternatives
Feb 13 15:08:15.230654 kernel: devtmpfs: initialized
Feb 13 15:08:15.230672 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:08:15.230690 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:08:15.230709 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:08:15.230727 kernel: SMBIOS 3.0.0 present.
Feb 13 15:08:15.230750 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:08:15.230789 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:08:15.230839 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:08:15.230859 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:08:15.230878 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:08:15.230896 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:08:15.230915 kernel: audit: type=2000 audit(0.253:1): state=initialized audit_enabled=0 res=1
Feb 13 15:08:15.230941 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:08:15.230959 kernel: cpuidle: using governor menu
Feb 13 15:08:15.230977 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:08:15.230996 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:08:15.231014 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:08:15.231032 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:08:15.231051 kernel: Modules: 17760 pages in range for non-PLT usage
Feb 13 15:08:15.231070 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:08:15.231088 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:08:15.231112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:08:15.231130 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:08:15.231149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:08:15.231167 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:08:15.231186 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:08:15.231205 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:08:15.231223 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:08:15.231242 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:08:15.231260 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:08:15.231284 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:08:15.231303 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:08:15.231321 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:08:15.231340 kernel: ACPI: Interpreter enabled
Feb 13 15:08:15.231358 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:08:15.231378 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:08:15.231396 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:08:15.231743 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:08:15.232022 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:08:15.232255 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:08:15.232465 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:08:15.232676 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:08:15.232702 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 15:08:15.232721 kernel: acpiphp: Slot [1] registered
Feb 13 15:08:15.232740 kernel: acpiphp: Slot [2] registered
Feb 13 15:08:15.232758 kernel: acpiphp: Slot [3] registered
Feb 13 15:08:15.232803 kernel: acpiphp: Slot [4] registered
Feb 13 15:08:15.232823 kernel: acpiphp: Slot [5] registered
Feb 13 15:08:15.232841 kernel: acpiphp: Slot [6] registered
Feb 13 15:08:15.232860 kernel: acpiphp: Slot [7] registered
Feb 13 15:08:15.232878 kernel: acpiphp: Slot [8] registered
Feb 13 15:08:15.232896 kernel: acpiphp: Slot [9] registered
Feb 13 15:08:15.232914 kernel: acpiphp: Slot [10] registered
Feb 13 15:08:15.232932 kernel: acpiphp: Slot [11] registered
Feb 13 15:08:15.232951 kernel: acpiphp: Slot [12] registered
Feb 13 15:08:15.232969 kernel: acpiphp: Slot [13] registered
Feb 13 15:08:15.232992 kernel: acpiphp: Slot [14] registered
Feb 13 15:08:15.233010 kernel: acpiphp: Slot [15] registered
Feb 13 15:08:15.233028 kernel: acpiphp: Slot [16] registered
Feb 13 15:08:15.233046 kernel: acpiphp: Slot [17] registered
Feb 13 15:08:15.233064 kernel: acpiphp: Slot [18] registered
Feb 13 15:08:15.233082 kernel: acpiphp: Slot [19] registered
Feb 13 15:08:15.233100 kernel: acpiphp: Slot [20] registered
Feb 13 15:08:15.233119 kernel: acpiphp: Slot [21] registered
Feb 13 15:08:15.233137 kernel: acpiphp: Slot [22] registered
Feb 13 15:08:15.233160 kernel: acpiphp: Slot [23] registered
Feb 13 15:08:15.233179 kernel: acpiphp: Slot [24] registered
Feb 13 15:08:15.233197 kernel: acpiphp: Slot [25] registered
Feb 13 15:08:15.233215 kernel: acpiphp: Slot [26] registered
Feb 13 15:08:15.233233 kernel: acpiphp: Slot [27] registered
Feb 13 15:08:15.233251 kernel: acpiphp: Slot [28] registered
Feb 13 15:08:15.233269 kernel: acpiphp: Slot [29] registered
Feb 13 15:08:15.233287 kernel: acpiphp: Slot [30] registered
Feb 13 15:08:15.233332 kernel: acpiphp: Slot [31] registered
Feb 13 15:08:15.233353 kernel: PCI host bridge to bus 0000:00
Feb 13 15:08:15.233582 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:08:15.235833 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:08:15.238172 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:08:15.238383 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:08:15.238631 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:08:15.238936 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:08:15.239171 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:08:15.241140 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:08:15.241380 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:08:15.241590 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:08:15.241862 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:08:15.242095 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:08:15.242322 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:08:15.242531 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:08:15.242740 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:08:15.244424 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:08:15.244684 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:08:15.244949 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:08:15.245173 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:08:15.245423 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:08:15.245651 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:08:15.248975 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:08:15.249193 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:08:15.249221 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:08:15.249241 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:08:15.249260 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:08:15.249279 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:08:15.249298 kernel: iommu: Default domain type: Translated
Feb 13 15:08:15.249327 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:08:15.249347 kernel: efivars: Registered efivars operations
Feb 13 15:08:15.249366 kernel: vgaarb: loaded
Feb 13 15:08:15.249385 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:08:15.249404 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:08:15.249423 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:08:15.249442 kernel: pnp: PnP ACPI init
Feb 13 15:08:15.249676 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:08:15.249714 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:08:15.249734 kernel: NET: Registered PF_INET protocol family
Feb 13 15:08:15.249754 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:08:15.249806 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:08:15.249829 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:08:15.249849 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:08:15.249869 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:08:15.249890 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:08:15.249910 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:08:15.249938 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:08:15.249958 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:08:15.249978 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:08:15.249998 kernel: kvm [1]: HYP mode not available
Feb 13 15:08:15.250018 kernel: Initialise system trusted keyrings
Feb 13 15:08:15.250039 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:08:15.250059 kernel: Key type asymmetric registered
Feb 13 15:08:15.250078 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:08:15.250098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:08:15.250123 kernel: io scheduler mq-deadline registered
Feb 13 15:08:15.250143 kernel: io scheduler kyber registered
Feb 13 15:08:15.250162 kernel: io scheduler bfq registered
Feb 13 15:08:15.250460 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:08:15.250497 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:08:15.250517 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:08:15.250537 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:08:15.250557 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:08:15.250585 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:08:15.250607 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:08:15.251968 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:08:15.252009 kernel: printk: console [ttyS0] disabled
Feb 13 15:08:15.252029 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:08:15.252048 kernel: printk: console [ttyS0] enabled
Feb 13 15:08:15.252067 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:08:15.252085 kernel: thunder_xcv, ver 1.0
Feb 13 15:08:15.252103 kernel: thunder_bgx, ver 1.0
Feb 13 15:08:15.252151 kernel: nicpf, ver 1.0
Feb 13 15:08:15.252173 kernel: nicvf, ver 1.0
Feb 13 15:08:15.252431 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:08:15.252626 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:08:14 UTC (1739459294)
Feb 13 15:08:15.252652 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:08:15.252671 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:08:15.252690 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:08:15.252708 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:08:15.252733 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:08:15.252751 kernel: Segment Routing with IPv6
Feb 13 15:08:15.252792 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:08:15.252915 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:08:15.252941 kernel: Key type dns_resolver registered
Feb 13 15:08:15.252960 kernel: registered taskstats version 1
Feb 13 15:08:15.252979 kernel: Loading compiled-in X.509 certificates
Feb 13 15:08:15.252999 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:08:15.253018 kernel: Key type .fscrypt registered
Feb 13 15:08:15.253044 kernel: Key type fscrypt-provisioning registered
Feb 13 15:08:15.253062 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:08:15.253080 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:08:15.253099 kernel: ima: No architecture policies found
Feb 13 15:08:15.253117 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:08:15.253135 kernel: clk: Disabling unused clocks
Feb 13 15:08:15.253153 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:08:15.253171 kernel: Run /init as init process
Feb 13 15:08:15.253189 kernel: with arguments:
Feb 13 15:08:15.253207 kernel: /init
Feb 13 15:08:15.253230 kernel: with environment:
Feb 13 15:08:15.253248 kernel: HOME=/
Feb 13 15:08:15.253266 kernel: TERM=linux
Feb 13 15:08:15.253284 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:08:15.253305 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:08:15.253330 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:08:15.253351 systemd[1]: Detected virtualization amazon.
Feb 13 15:08:15.253376 systemd[1]: Detected architecture arm64.
Feb 13 15:08:15.253396 systemd[1]: Running in initrd.
Feb 13 15:08:15.253416 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:08:15.253438 systemd[1]: Hostname set to <localhost>.
Feb 13 15:08:15.253458 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:08:15.253478 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:08:15.253498 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:08:15.253519 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:08:15.253545 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:08:15.253566 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:08:15.253586 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:08:15.253607 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:08:15.253629 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:08:15.253650 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:08:15.253670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:08:15.253694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:08:15.253714 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:08:15.253734 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:08:15.253754 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:08:15.253859 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:08:15.253882 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:08:15.253903 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:08:15.253923 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:08:15.253943 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:08:15.253969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:08:15.253989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:08:15.254009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:08:15.254028 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:08:15.254048 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:08:15.254069 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:08:15.254089 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:08:15.254109 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:08:15.254133 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:08:15.254153 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:08:15.254173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:15.254193 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:08:15.254213 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:08:15.254234 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:08:15.254307 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 15:08:15.254350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:08:15.254371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:15.254396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:08:15.254417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:15.254437 systemd-journald[252]: Journal started
Feb 13 15:08:15.254474 systemd-journald[252]: Runtime Journal (/run/log/journal/ec25bc03f574cd2cb76564b1bdcd1fee) is 8M, max 75.3M, 67.3M free.
Feb 13 15:08:15.217279 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 15:08:15.258809 kernel: Bridge firewalling registered
Feb 13 15:08:15.259833 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 15:08:15.270037 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:08:15.271364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:08:15.282543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:08:15.292096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:08:15.294045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:08:15.300415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:08:15.340351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:08:15.344026 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:08:15.355983 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:15.372328 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:08:15.387698 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:08:15.401301 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:08:15.415911 dracut-cmdline[287]: dracut-dracut-053
Feb 13 15:08:15.425497 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:08:15.493501 systemd-resolved[293]: Positive Trust Anchors:
Feb 13 15:08:15.493548 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:08:15.493613 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:08:15.586802 kernel: SCSI subsystem initialized
Feb 13 15:08:15.593807 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:08:15.606810 kernel: iscsi: registered transport (tcp)
Feb 13 15:08:15.629807 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:08:15.629879 kernel: QLogic iSCSI HBA Driver
Feb 13 15:08:15.716143 kernel: random: crng init done
Feb 13 15:08:15.716225 systemd-resolved[293]: Defaulting to hostname 'linux'.
Feb 13 15:08:15.719600 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:08:15.723596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:08:15.749854 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:08:15.759251 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:08:15.802426 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:08:15.802515 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:08:15.804280 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:08:15.870830 kernel: raid6: neonx8 gen() 6625 MB/s
Feb 13 15:08:15.887801 kernel: raid6: neonx4 gen() 6591 MB/s
Feb 13 15:08:15.904801 kernel: raid6: neonx2 gen() 5469 MB/s
Feb 13 15:08:15.921803 kernel: raid6: neonx1 gen() 3969 MB/s
Feb 13 15:08:15.938799 kernel: raid6: int64x8 gen() 3638 MB/s
Feb 13 15:08:15.955804 kernel: raid6: int64x4 gen() 3719 MB/s
Feb 13 15:08:15.972801 kernel: raid6: int64x2 gen() 3613 MB/s
Feb 13 15:08:15.990544 kernel: raid6: int64x1 gen() 2774 MB/s
Feb 13 15:08:15.990582 kernel: raid6: using algorithm neonx8 gen() 6625 MB/s
Feb 13 15:08:16.008539 kernel: raid6: .... xor() 4742 MB/s, rmw enabled
Feb 13 15:08:16.008585 kernel: raid6: using neon recovery algorithm
Feb 13 15:08:16.015803 kernel: xor: measuring software checksum speed
Feb 13 15:08:16.016799 kernel: 8regs : 11915 MB/sec
Feb 13 15:08:16.018937 kernel: 32regs : 12002 MB/sec
Feb 13 15:08:16.018970 kernel: arm64_neon : 9572 MB/sec
Feb 13 15:08:16.018995 kernel: xor: using function: 32regs (12002 MB/sec)
Feb 13 15:08:16.104864 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:08:16.127160 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:08:16.136089 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:08:16.185263 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Feb 13 15:08:16.195760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:08:16.218161 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:08:16.249082 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Feb 13 15:08:16.304223 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:08:16.315067 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:08:16.444687 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:08:16.457026 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:08:16.509074 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:08:16.515325 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:08:16.518296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:08:16.520938 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:08:16.534046 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:08:16.573580 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:08:16.652963 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:08:16.653069 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:08:16.684722 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:08:16.685063 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:08:16.685309 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:80:6b:a6:f6:97
Feb 13 15:08:16.664685 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:08:16.664983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:16.667950 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:16.670159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:08:16.670485 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:16.673323 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:16.687238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:16.692691 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:08:16.702669 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:08:16.736814 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:08:16.739893 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:08:16.747480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:16.764401 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:08:16.764739 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:08:16.764797 kernel: GPT:9289727 != 16777215
Feb 13 15:08:16.764827 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:08:16.764854 kernel: GPT:9289727 != 16777215
Feb 13 15:08:16.764878 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:08:16.764912 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:16.769522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:16.807909 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:16.900442 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (520)
Feb 13 15:08:16.958819 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (529)
Feb 13 15:08:17.015020 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:08:17.045528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:08:17.102836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:08:17.124277 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:08:17.129868 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:08:17.148192 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:08:17.163037 disk-uuid[663]: Primary Header is updated.
Feb 13 15:08:17.163037 disk-uuid[663]: Secondary Entries is updated.
Feb 13 15:08:17.163037 disk-uuid[663]: Secondary Header is updated.
Feb 13 15:08:17.172829 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:17.185818 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:18.194793 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:18.195999 disk-uuid[664]: The operation has completed successfully.
Feb 13 15:08:18.413066 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:08:18.413271 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:08:18.509025 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:08:18.515835 sh[922]: Success
Feb 13 15:08:18.545834 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:08:18.669198 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:08:18.689027 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:08:18.693683 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:08:18.735570 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:08:18.735716 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:18.736837 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:08:18.739046 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:08:18.739120 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:08:18.854815 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:08:18.885162 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:08:18.889131 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:08:18.903094 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:08:18.911073 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:08:18.949156 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:18.949255 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:18.950641 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:18.958191 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:18.981643 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:08:18.985654 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:19.006912 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:08:19.019070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:08:19.104091 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:08:19.120037 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:08:19.183103 systemd-networkd[1115]: lo: Link UP
Feb 13 15:08:19.183126 systemd-networkd[1115]: lo: Gained carrier
Feb 13 15:08:19.188579 systemd-networkd[1115]: Enumeration completed
Feb 13 15:08:19.189279 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:08:19.194915 systemd[1]: Reached target network.target - Network.
Feb 13 15:08:19.198458 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:08:19.198467 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:08:19.206998 systemd-networkd[1115]: eth0: Link UP
Feb 13 15:08:19.207018 systemd-networkd[1115]: eth0: Gained carrier
Feb 13 15:08:19.207036 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:08:19.222902 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.30.142/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:08:19.466802 ignition[1046]: Ignition 2.20.0
Feb 13 15:08:19.466831 ignition[1046]: Stage: fetch-offline
Feb 13 15:08:19.467268 ignition[1046]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:19.467294 ignition[1046]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:19.472816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:08:19.468427 ignition[1046]: Ignition finished successfully
Feb 13 15:08:19.489584 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:08:19.513722 ignition[1127]: Ignition 2.20.0
Feb 13 15:08:19.513745 ignition[1127]: Stage: fetch
Feb 13 15:08:19.514533 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:19.514558 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:19.514731 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:19.525407 ignition[1127]: PUT result: OK
Feb 13 15:08:19.529017 ignition[1127]: parsed url from cmdline: ""
Feb 13 15:08:19.529052 ignition[1127]: no config URL provided
Feb 13 15:08:19.529068 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:08:19.529097 ignition[1127]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:08:19.529135 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:19.532756 ignition[1127]: PUT result: OK
Feb 13 15:08:19.533035 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:08:19.540387 ignition[1127]: GET result: OK
Feb 13 15:08:19.540546 ignition[1127]: parsing config with SHA512: 207559342cefa378dd528615b09fec4455d70899fd19f2172a28cc892b277113f384d6e8ff81192b6879d7e8958052695642d0ad8271096249006482b4146af0
Feb 13 15:08:19.552056 unknown[1127]: fetched base config from "system"
Feb 13 15:08:19.552088 unknown[1127]: fetched base config from "system"
Feb 13 15:08:19.553366 ignition[1127]: fetch: fetch complete
Feb 13 15:08:19.552102 unknown[1127]: fetched user config from "aws"
Feb 13 15:08:19.553378 ignition[1127]: fetch: fetch passed
Feb 13 15:08:19.557985 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:08:19.553465 ignition[1127]: Ignition finished successfully
Feb 13 15:08:19.579214 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:08:19.605058 ignition[1133]: Ignition 2.20.0
Feb 13 15:08:19.605087 ignition[1133]: Stage: kargs
Feb 13 15:08:19.606095 ignition[1133]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:19.606139 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:19.606290 ignition[1133]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:19.608193 ignition[1133]: PUT result: OK
Feb 13 15:08:19.627569 ignition[1133]: kargs: kargs passed
Feb 13 15:08:19.627936 ignition[1133]: Ignition finished successfully
Feb 13 15:08:19.633556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:08:19.642060 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:08:19.683588 ignition[1139]: Ignition 2.20.0
Feb 13 15:08:19.683620 ignition[1139]: Stage: disks
Feb 13 15:08:19.684758 ignition[1139]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:19.685161 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:19.685367 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:19.692190 ignition[1139]: PUT result: OK
Feb 13 15:08:19.707610 ignition[1139]: disks: disks passed
Feb 13 15:08:19.707752 ignition[1139]: Ignition finished successfully
Feb 13 15:08:19.712849 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:08:19.717187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:08:19.721497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:08:19.723911 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:08:19.725907 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:08:19.727897 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:08:19.744537 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:08:19.799336 systemd-fsck[1147]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:08:19.809977 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:08:19.819020 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:08:19.911809 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:08:19.913740 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:08:19.917358 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:08:19.937974 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:08:19.945041 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:08:19.947436 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:08:19.947543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:08:19.947601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:08:19.969264 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1166)
Feb 13 15:08:19.969330 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:19.972173 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:19.972258 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:19.975403 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:08:19.981853 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:19.986075 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:08:19.995320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:08:20.473841 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:08:20.495867 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:08:20.504699 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:08:20.511806 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:08:20.815243 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:08:20.824972 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:08:20.834964 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:08:20.850153 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:08:20.852353 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:20.891919 ignition[1279]: INFO : Ignition 2.20.0
Feb 13 15:08:20.891919 ignition[1279]: INFO : Stage: mount
Feb 13 15:08:20.896020 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:20.896020 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:20.901026 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:20.901026 ignition[1279]: INFO : PUT result: OK
Feb 13 15:08:20.902409 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:08:20.907872 systemd-networkd[1115]: eth0: Gained IPv6LL
Feb 13 15:08:20.915729 ignition[1279]: INFO : mount: mount passed
Feb 13 15:08:20.917560 ignition[1279]: INFO : Ignition finished successfully
Feb 13 15:08:20.921218 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:08:20.927984 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:08:20.959239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:08:20.978818 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291)
Feb 13 15:08:20.982937 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:20.982987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:20.983012 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:20.988806 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:20.991844 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:08:21.036813 ignition[1308]: INFO : Ignition 2.20.0
Feb 13 15:08:21.036813 ignition[1308]: INFO : Stage: files
Feb 13 15:08:21.040036 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:21.040036 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:21.040036 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:21.046932 ignition[1308]: INFO : PUT result: OK
Feb 13 15:08:21.051046 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:08:21.053857 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:08:21.053857 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:08:21.091797 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:08:21.094661 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:08:21.097737 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:08:21.097602 unknown[1308]: wrote ssh authorized keys file for user: core
Feb 13 15:08:21.103229 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:08:21.107002 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:08:21.189948 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:08:21.326381 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:08:21.330190 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:08:21.330190 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:08:21.634097 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:08:21.787041 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:08:21.787041 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:21.793902 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:08:22.209534 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:08:22.569491 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:22.569491 ignition[1308]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:08:22.577429 ignition[1308]: INFO : files: files passed
Feb 13 15:08:22.577429 ignition[1308]: INFO : Ignition finished successfully
Feb 13 15:08:22.600760 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:08:22.625158 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:08:22.632075 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:08:22.639891 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:08:22.640166 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:08:22.666996 initrd-setup-root-after-ignition[1336]: grep: Feb 13 15:08:22.666996 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:08:22.671712 initrd-setup-root-after-ignition[1336]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:08:22.671712 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:08:22.679598 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:08:22.683239 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:08:22.697279 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:08:22.744982 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:08:22.745740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:08:22.750363 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:08:22.755006 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:08:22.758628 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:08:22.772215 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:08:22.798988 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:08:22.810108 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:08:22.838672 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:08:22.843045 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:08:22.845602 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:08:22.847541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:08:22.847956 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:08:22.852356 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:08:22.854528 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:08:22.856469 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:08:22.859672 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:08:22.862565 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:08:22.866151 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:08:22.868534 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:08:22.878667 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:08:22.890002 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:08:22.893118 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:08:22.896038 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:08:22.896291 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:08:22.903695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:08:22.905981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:08:22.908431 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 15:08:22.910826 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:08:22.919497 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:08:22.919759 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:08:22.925654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:08:22.926100 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:08:22.933122 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:08:22.933559 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:08:22.947195 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:08:22.953884 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:08:22.958989 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:08:22.959362 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:08:22.965134 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:08:22.965373 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:08:22.993324 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:08:22.996739 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:08:23.008321 ignition[1360]: INFO : Ignition 2.20.0 Feb 13 15:08:23.008321 ignition[1360]: INFO : Stage: umount Feb 13 15:08:23.012731 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:08:23.012731 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:08:23.012731 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:08:23.025596 ignition[1360]: INFO : PUT result: OK Feb 13 15:08:23.024572 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:08:23.032452 ignition[1360]: INFO : umount: umount passed Feb 13 15:08:23.035679 ignition[1360]: INFO : Ignition finished successfully Feb 13 15:08:23.038050 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:08:23.039846 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:08:23.042468 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:08:23.042652 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:08:23.047227 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:08:23.047429 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:08:23.051468 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:08:23.051591 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:08:23.054362 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:08:23.054460 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:08:23.066553 systemd[1]: Stopped target network.target - Network. Feb 13 15:08:23.068159 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:08:23.068258 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:08:23.070501 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:08:23.072177 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 15:08:23.075873 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:08:23.086414 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:08:23.088087 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:08:23.089956 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:08:23.090036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:08:23.091904 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:08:23.091972 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:08:23.093893 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:08:23.095101 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:08:23.097349 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:08:23.097433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:08:23.099383 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:08:23.099466 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:08:23.101685 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:08:23.103713 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:08:23.120668 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:08:23.120937 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:08:23.140412 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:08:23.141046 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:08:23.141236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:08:23.154211 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:08:23.155535 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:08:23.155661 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:08:23.172029 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:08:23.174573 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:08:23.174684 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:08:23.178342 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:08:23.178426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:08:23.191136 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:08:23.191236 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:08:23.193360 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:08:23.193444 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:08:23.201861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:08:23.212311 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:08:23.213576 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:08:23.234143 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:08:23.234398 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Feb 13 15:08:23.241278 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:08:23.243286 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:08:23.249072 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:08:23.249228 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:08:23.253870 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:08:23.255832 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:08:23.259504 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:08:23.259596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:08:23.267075 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:08:23.267167 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:08:23.269353 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:08:23.269437 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:08:23.290116 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:08:23.295038 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:08:23.295162 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:08:23.305386 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:08:23.305490 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:08:23.310403 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:08:23.310505 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:08:23.313309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:08:23.313389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:08:23.319660 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:08:23.319895 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:08:23.320753 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:08:23.321106 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:08:23.327205 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:08:23.352992 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:08:23.371020 systemd[1]: Switching root. Feb 13 15:08:23.421176 systemd-journald[252]: Journal stopped Feb 13 15:08:26.898577 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). 
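Unit names in the shutdown sequence above, such as run-credentials-systemd\x2dnetworkd.service.mount and run-credentials-systemd\x2dsysctl.service.mount, use systemd's path escaping: '/' becomes '-' and a literal '-' inside a path component becomes \x2d. The real tool for this is systemd-escape; the re-implementation below is illustrative only and skips corner cases like a leading dot:

    def systemd_escape_path(path: str) -> str:
        # Illustrative sketch of `systemd-escape --path`: drop empty path
        # components, turn '/' into '-', and hex-escape any character
        # outside the safe set (so '-' itself becomes \x2d).
        safe = set("abcdefghijklmnopqrstuvwxyz"
                   "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
        trimmed = "/".join(p for p in path.split("/") if p)
        return "".join(
            "-" if ch == "/" else ch if ch in safe else f"\\x{ord(ch):02x}"
            for ch in trimmed
        )

    # Prints 'run-credentials-systemd\x2dnetworkd.service'; with the '.mount'
    # suffix appended, that is the unit name logged above.
    print(systemd_escape_path("/run/credentials/systemd-networkd.service"))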
Feb 13 15:08:26.898713 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:08:26.898757 kernel: SELinux: policy capability open_perms=1 Feb 13 15:08:26.904427 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:08:26.904472 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:08:26.904503 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:08:26.904534 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:08:26.904563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:08:26.904600 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:08:26.904630 kernel: audit: type=1403 audit(1739459304.700:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:08:26.904670 systemd[1]: Successfully loaded SELinux policy in 50.974ms. Feb 13 15:08:26.904721 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.144ms. Feb 13 15:08:26.904756 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:08:26.908834 systemd[1]: Detected virtualization amazon. Feb 13 15:08:26.908883 systemd[1]: Detected architecture arm64. Feb 13 15:08:26.908916 systemd[1]: Detected first boot. Feb 13 15:08:26.908957 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:08:26.908987 zram_generator::config[1404]: No configuration found. Feb 13 15:08:26.909033 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:08:26.909062 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:08:26.909095 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:08:26.909127 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:08:26.909158 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:08:26.909190 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:08:26.909222 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:08:26.909257 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:08:26.909287 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:08:26.909318 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:08:26.909349 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:08:26.909382 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:08:26.909411 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:08:26.909439 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:08:26.909479 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:08:26.909513 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:08:26.909543 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:08:26.909574 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Feb 13 15:08:26.909604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:08:26.909635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:08:26.909666 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:08:26.909694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:08:26.909723 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:08:26.909756 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:08:26.913064 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:08:26.913103 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:08:26.913134 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:08:26.913168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:08:26.913200 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:08:26.913234 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:08:26.913263 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:08:26.913292 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:08:26.913328 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:08:26.913362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:08:26.913393 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:08:26.913422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:08:26.913450 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:08:26.913478 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:08:26.913509 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:08:26.913538 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:08:26.913566 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:08:26.913601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:08:26.913632 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:08:26.913664 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:08:26.913695 systemd[1]: Reached target machines.target - Containers. Feb 13 15:08:26.913726 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:08:26.913758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:08:26.914005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:08:26.914037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:08:26.914074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:08:26.914106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:08:26.914137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 15:08:26.914168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:08:26.914197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:08:26.914239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:08:26.914271 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:08:26.914302 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:08:26.914331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:08:26.914364 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:08:26.914396 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:08:26.914427 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:08:26.914456 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:08:26.914485 kernel: fuse: init (API version 7.39) Feb 13 15:08:26.914514 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:08:26.914545 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:08:26.914576 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:08:26.914610 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:08:26.914639 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:08:26.914669 systemd[1]: Stopped verity-setup.service. Feb 13 15:08:26.914697 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:08:26.914730 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:08:26.917035 kernel: loop: module loaded Feb 13 15:08:26.917091 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:08:26.917124 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:08:26.917158 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:08:26.917195 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:08:26.917225 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:08:26.917260 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:08:26.917290 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:08:26.917320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:08:26.917348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:08:26.917378 kernel: ACPI: bus type drm_connector registered Feb 13 15:08:26.917786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:08:26.917928 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:08:26.917960 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:08:26.918036 systemd-journald[1487]: Collecting audit messages is disabled. Feb 13 15:08:26.918092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:08:26.918122 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Feb 13 15:08:26.918151 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:08:26.918180 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:08:26.918210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:08:26.918239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:08:26.918269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:08:26.918302 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:08:26.918332 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:08:26.918360 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:08:26.918390 systemd-journald[1487]: Journal started Feb 13 15:08:26.918435 systemd-journald[1487]: Runtime Journal (/run/log/journal/ec25bc03f574cd2cb76564b1bdcd1fee) is 8M, max 75.3M, 67.3M free. Feb 13 15:08:26.276778 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:08:26.291136 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:08:26.292011 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:08:26.935896 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:08:26.955714 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:08:26.956081 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:08:26.961001 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:08:26.970831 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:08:26.985843 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:08:27.000650 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:08:27.004849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:08:27.020332 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:08:27.025875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:08:27.032725 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:08:27.035537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:08:27.056194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:08:27.067252 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:08:27.076260 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:08:27.085788 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:08:27.088063 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:08:27.091167 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:08:27.093799 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
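One small consistency check on the journald size report above: the free figure is simply the cap minus current usage, both for the runtime journal here and for the system journal reported in the flush message further down:

    # Runtime journal: 8M used, 75.3M cap -> 67.3M free, as logged above.
    print(round(75.3 - 8.0, 1))
    # System journal (reported after the flush below): 8M used, 195.6M cap.
    print(round(195.6 - 8.0, 1))   # -> 187.6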
Feb 13 15:08:27.097254 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:08:27.136456 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:08:27.157520 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:08:27.176485 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 15:08:27.178271 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:08:27.193139 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:08:27.215922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:08:27.231147 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:08:27.234988 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:08:27.256834 systemd-journald[1487]: Time spent on flushing to /var/log/journal/ec25bc03f574cd2cb76564b1bdcd1fee is 51.977ms for 933 entries. Feb 13 15:08:27.256834 systemd-journald[1487]: System Journal (/var/log/journal/ec25bc03f574cd2cb76564b1bdcd1fee) is 8M, max 195.6M, 187.6M free. Feb 13 15:08:27.320963 systemd-journald[1487]: Received client request to flush runtime journal. Feb 13 15:08:27.259316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:08:27.269434 systemd-tmpfiles[1521]: ACLs are not supported, ignoring. Feb 13 15:08:27.269458 systemd-tmpfiles[1521]: ACLs are not supported, ignoring. Feb 13 15:08:27.290638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:08:27.301265 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:08:27.305626 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:08:27.326856 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:08:27.335906 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:08:27.344391 udevadm[1550]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:08:27.366852 kernel: loop1: detected capacity change from 0 to 189592 Feb 13 15:08:27.408984 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:08:27.421012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:08:27.437851 kernel: loop2: detected capacity change from 0 to 53784 Feb 13 15:08:27.476853 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Feb 13 15:08:27.476904 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Feb 13 15:08:27.491691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:08:27.574884 kernel: loop3: detected capacity change from 0 to 113512 Feb 13 15:08:27.687834 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 15:08:27.708813 kernel: loop5: detected capacity change from 0 to 189592 Feb 13 15:08:27.742582 kernel: loop6: detected capacity change from 0 to 53784 Feb 13 15:08:27.759813 kernel: loop7: detected capacity change from 0 to 113512 Feb 13 15:08:27.776642 (sd-merge)[1568]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:08:27.778214 (sd-merge)[1568]: Merged extensions into '/usr'. 
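The loop0-loop3 and loop4-loop7 "capacity change" pairs in the kernel messages above track the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') being attached once for scanning and once for merging, after which sd-merge overlays them onto /usr. For systemd-sysext to accept an image, it must contain an extension-release file whose ID matches the host OS (or is _any); a sketch of the expected layout, with assumed values since the image contents are not shown in this log:

    /etc/extensions/kubernetes.raw                # symlink written by Ignition
        (inside the image)
        /usr/lib/extension-release.d/extension-release.kubernetes
            ID=flatcar          # assumed; ID=_any would also pass the check
            SYSEXT_LEVEL=1.0    # assumed

This is also why the earlier initrd grep complained about a missing /sysroot/etc/flatcar/enabled-sysext.conf: with no explicit enable list, the merge falls back to whatever valid images are present under /etc/extensions.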
Feb 13 15:08:27.786422 systemd[1]: Reload requested from client PID 1520 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:08:27.786582 systemd[1]: Reloading... Feb 13 15:08:27.923990 zram_generator::config[1592]: No configuration found. Feb 13 15:08:28.305209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:08:28.460637 systemd[1]: Reloading finished in 673 ms. Feb 13 15:08:28.483225 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:08:28.486468 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:08:28.500161 systemd[1]: Starting ensure-sysext.service... Feb 13 15:08:28.504167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:08:28.516150 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:08:28.555886 systemd[1]: Reload requested from client PID 1648 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:08:28.555920 systemd[1]: Reloading... Feb 13 15:08:28.610634 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:08:28.611186 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:08:28.615878 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:08:28.616582 systemd-tmpfiles[1649]: ACLs are not supported, ignoring. Feb 13 15:08:28.616741 systemd-tmpfiles[1649]: ACLs are not supported, ignoring. Feb 13 15:08:28.625289 systemd-udevd[1650]: Using default interface naming scheme 'v255'. Feb 13 15:08:28.640678 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:08:28.640704 systemd-tmpfiles[1649]: Skipping /boot Feb 13 15:08:28.708053 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:08:28.708083 systemd-tmpfiles[1649]: Skipping /boot Feb 13 15:08:28.840822 zram_generator::config[1703]: No configuration found. Feb 13 15:08:28.931058 (udev-worker)[1694]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:08:29.024618 ldconfig[1516]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:08:29.271350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:08:29.310912 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1695) Feb 13 15:08:29.473112 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:08:29.473385 systemd[1]: Reloading finished in 916 ms. Feb 13 15:08:29.498719 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:08:29.504949 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:08:29.543132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
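The docker.socket warning repeated during both reloads above is systemd rewriting a legacy path at load time: the shipped unit still points ListenStream= at /var/run/docker.sock. The change the message asks for is a one-line edit; a sketch of just the relevant stanza, not the full shipped unit:

    [Socket]
    # was: ListenStream=/var/run/docker.sock
    ListenStream=/run/docker.sock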
Feb 13 15:08:29.582545 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:08:29.601634 systemd[1]: Finished ensure-sysext.service. Feb 13 15:08:29.650681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:08:29.664055 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:08:29.676198 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:08:29.678742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:08:29.688679 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:08:29.704912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:08:29.712138 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:08:29.718068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:08:29.724098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:08:29.726339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:08:29.733920 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:08:29.736199 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:08:29.738511 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:08:29.749206 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:08:29.762114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:08:29.764332 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:08:29.775744 lvm[1855]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:08:29.776058 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:08:29.783097 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:08:29.787420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:08:29.790892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:08:29.817071 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:08:29.835649 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:08:29.836344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:08:29.839915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:08:29.844536 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:08:29.844999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:08:29.856905 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:08:29.861356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:08:29.874217 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Feb 13 15:08:29.877236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:08:29.897495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:08:29.898015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:08:29.901176 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:08:29.909647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:08:29.944892 augenrules[1892]: No rules Feb 13 15:08:29.946575 lvm[1886]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:08:29.947404 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:08:29.947922 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:08:29.954498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:08:29.969184 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:08:29.994593 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:08:29.995598 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:08:30.010631 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:08:30.026307 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:08:30.065314 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:08:30.091640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:08:30.199843 systemd-networkd[1872]: lo: Link UP Feb 13 15:08:30.199860 systemd-networkd[1872]: lo: Gained carrier Feb 13 15:08:30.203181 systemd-resolved[1874]: Positive Trust Anchors: Feb 13 15:08:30.203215 systemd-resolved[1874]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:08:30.203279 systemd-resolved[1874]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:08:30.204319 systemd-networkd[1872]: Enumeration completed Feb 13 15:08:30.204502 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:08:30.209630 systemd-networkd[1872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:08:30.209653 systemd-networkd[1872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:08:30.211754 systemd-networkd[1872]: eth0: Link UP Feb 13 15:08:30.212062 systemd-networkd[1872]: eth0: Gained carrier Feb 13 15:08:30.212095 systemd-networkd[1872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:08:30.213112 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:08:30.221256 systemd-resolved[1874]: Defaulting to hostname 'linux'. Feb 13 15:08:30.229932 systemd-networkd[1872]: eth0: DHCPv4 address 172.31.30.142/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:08:30.230293 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:08:30.232839 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:08:30.248710 systemd[1]: Reached target network.target - Network. Feb 13 15:08:30.251934 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:08:30.255116 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:08:30.258358 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:08:30.261929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:08:30.265206 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:08:30.267852 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:08:30.270390 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:08:30.273131 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:08:30.273195 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:08:30.280318 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:08:30.284190 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:08:30.290019 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:08:30.297588 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:08:30.300683 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:08:30.303250 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:08:30.317369 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:08:30.320463 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:08:30.324748 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:08:30.327561 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:08:30.330695 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:08:30.332676 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:08:30.334721 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:08:30.334937 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:08:30.341067 systemd[1]: Starting containerd.service - containerd container runtime... 
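In the networkd sequence above, eth0 is matched by the stock catch-all /usr/lib/systemd/network/zz-default.network and then configured over DHCPv4 (172.31.30.142/20, gateway 172.31.16.1, leased from 172.31.16.1). The shipped file is not reproduced in this log; a minimal .network file with the behaviour logged here would look roughly like:

    [Match]
    # Matches any interface name, which is why the log calls the
    # result "potentially unpredictable".
    Name=*

    [Network]
    DHCP=yes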
Feb 13 15:08:30.351582 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:08:30.357533 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:08:30.363476 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:08:30.372510 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:08:30.374547 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:08:30.377252 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:08:30.385222 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:08:30.400040 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:08:30.404503 jq[1923]: false Feb 13 15:08:30.415389 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:08:30.424283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:08:30.431121 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:08:30.447065 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:08:30.452387 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:08:30.453417 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:08:30.456488 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:08:30.470001 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:08:30.483576 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:08:30.484179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:08:30.563498 update_engine[1932]: I20250213 15:08:30.563359 1932 main.cc:92] Flatcar Update Engine starting Feb 13 15:08:30.571320 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:08:30.574935 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:08:30.586819 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:08:30.608814 jq[1933]: true Feb 13 15:08:30.608711 dbus-daemon[1922]: [system] SELinux support is enabled Feb 13 15:08:30.617668 (ntainerd)[1955]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:08:30.625993 tar[1937]: linux-arm64/helm Feb 13 15:08:30.627094 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:08:30.640330 dbus-daemon[1922]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1872 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:08:30.642422 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:08:30.642470 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 15:08:30.646889 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:08:30.646929 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:08:30.655708 ntpd[1926]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:12 UTC 2025 (1): Starting Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:12 UTC 2025 (1): Starting Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: ---------------------------------------------------- Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: corporation. Support and training for ntp-4 are Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: available at https://www.nwtime.org/support Feb 13 15:08:30.660280 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: ---------------------------------------------------- Feb 13 15:08:30.656834 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:08:30.656871 ntpd[1926]: ---------------------------------------------------- Feb 13 15:08:30.661186 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: proto: precision = 0.108 usec (-23) Feb 13 15:08:30.656891 ntpd[1926]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:08:30.656909 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:08:30.656927 ntpd[1926]: corporation. 
Support and training for ntp-4 are Feb 13 15:08:30.656945 ntpd[1926]: available at https://www.nwtime.org/support Feb 13 15:08:30.656963 ntpd[1926]: ---------------------------------------------------- Feb 13 15:08:30.660842 ntpd[1926]: proto: precision = 0.108 usec (-23) Feb 13 15:08:30.662956 ntpd[1926]: basedate set to 2025-02-01 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: basedate set to 2025-02-01 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: gps base set to 2025-02-02 (week 2352) Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Listen normally on 3 eth0 172.31.30.142:123 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Listen normally on 4 lo [::1]:123 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: bind(21) AF_INET6 fe80::480:6bff:fea6:f697%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: unable to create socket on eth0 (5) for fe80::480:6bff:fea6:f697%2#123 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: failed to init interface for address fe80::480:6bff:fea6:f697%2 Feb 13 15:08:30.675497 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: Listening on routing socket on fd #21 for interface updates Feb 13 15:08:30.678162 extend-filesystems[1924]: Found loop4 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found loop5 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found loop6 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found loop7 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1p1 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1p2 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1p3 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found usr Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1p4 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1p6 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1p7 Feb 13 15:08:30.678162 extend-filesystems[1924]: Found nvme0n1p9 Feb 13 15:08:30.678162 extend-filesystems[1924]: Checking size of /dev/nvme0n1p9 Feb 13 15:08:30.789581 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:08:30.663002 ntpd[1926]: gps base set to 2025-02-02 (week 2352) Feb 13 15:08:30.693057 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:08:30.789808 extend-filesystems[1924]: Resized partition /dev/nvme0n1p9 Feb 13 15:08:30.794003 update_engine[1932]: I20250213 15:08:30.686758 1932 update_check_scheduler.cc:74] Next update check in 3m3s Feb 13 15:08:30.794150 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:30.794150 ntpd[1926]: 13 Feb 15:08:30 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:30.669839 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:08:30.693486 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
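In the ntpd banner above (which journald captures twice, once directly and once with ntpd's own "13 Feb 15:08:30 ntpd[1926]:" prefix), the line "proto: precision = 0.108 usec (-23)" pairs the measured clock-read granularity with its power-of-two exponent: log2(0.108e-6 s) is about -23.1, so ntpd files it under 2^-23 s. A quick check:

    import math

    measured = 0.108e-6                  # seconds, from the log line
    print(math.log2(measured))           # ~ -23.14 -> reported as (-23)
    print(2**-23 * 1e6)                  # ~ 0.119 usec, the matching bucket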
Feb 13 15:08:30.794387 extend-filesystems[1972]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:08:30.669923 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:08:30.698731 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:08:30.799689 jq[1961]: true Feb 13 15:08:30.670191 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:08:30.715122 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:08:30.670251 ntpd[1926]: Listen normally on 3 eth0 172.31.30.142:123 Feb 13 15:08:30.730295 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:08:30.670320 ntpd[1926]: Listen normally on 4 lo [::1]:123 Feb 13 15:08:30.733383 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:08:30.670392 ntpd[1926]: bind(21) AF_INET6 fe80::480:6bff:fea6:f697%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:30.670429 ntpd[1926]: unable to create socket on eth0 (5) for fe80::480:6bff:fea6:f697%2#123 Feb 13 15:08:30.670455 ntpd[1926]: failed to init interface for address fe80::480:6bff:fea6:f697%2 Feb 13 15:08:30.670504 ntpd[1926]: Listening on routing socket on fd #21 for interface updates Feb 13 15:08:30.679144 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:08:30.682834 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:30.682880 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:30.915812 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:08:30.959667 systemd-logind[1931]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:08:30.959804 systemd-logind[1931]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:08:30.961167 systemd-logind[1931]: New seat seat0. Feb 13 15:08:30.971052 extend-filesystems[1972]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:08:30.971052 extend-filesystems[1972]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:08:30.971052 extend-filesystems[1972]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:08:30.965659 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:08:30.975130 extend-filesystems[1924]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:08:30.966423 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:08:30.973743 systemd[1]: Started systemd-logind.service - User Login Management. 
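The extend-filesystems sequence above grew the root filesystem online: resize2fs took /dev/nvme0n1p9 from 553472 to 1489915 blocks, and the log notes these are 4k blocks. In bytes:

    BLOCK = 4096                          # "(4k) blocks" per the log line
    print(553472  * BLOCK / 2**30)        # ~ 2.11 GiB before the resize
    print(1489915 * BLOCK / 2**30)        # ~ 5.68 GiB after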
Feb 13 15:08:30.988869 coreos-metadata[1921]: Feb 13 15:08:30.988 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:08:31.003668 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1694) Feb 13 15:08:31.003760 coreos-metadata[1921]: Feb 13 15:08:30.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:08:31.011800 coreos-metadata[1921]: Feb 13 15:08:31.005 INFO Fetch successful Feb 13 15:08:31.011800 coreos-metadata[1921]: Feb 13 15:08:31.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:08:31.012419 coreos-metadata[1921]: Feb 13 15:08:31.012 INFO Fetch successful Feb 13 15:08:31.012419 coreos-metadata[1921]: Feb 13 15:08:31.012 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:08:31.014247 coreos-metadata[1921]: Feb 13 15:08:31.013 INFO Fetch successful Feb 13 15:08:31.014247 coreos-metadata[1921]: Feb 13 15:08:31.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:08:31.015438 coreos-metadata[1921]: Feb 13 15:08:31.015 INFO Fetch successful Feb 13 15:08:31.015438 coreos-metadata[1921]: Feb 13 15:08:31.015 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:08:31.017497 coreos-metadata[1921]: Feb 13 15:08:31.017 INFO Fetch failed with 404: resource not found Feb 13 15:08:31.017497 coreos-metadata[1921]: Feb 13 15:08:31.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:08:31.028988 coreos-metadata[1921]: Feb 13 15:08:31.028 INFO Fetch successful Feb 13 15:08:31.028988 coreos-metadata[1921]: Feb 13 15:08:31.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:08:31.031626 coreos-metadata[1921]: Feb 13 15:08:31.030 INFO Fetch successful Feb 13 15:08:31.031626 coreos-metadata[1921]: Feb 13 15:08:31.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:08:31.037317 coreos-metadata[1921]: Feb 13 15:08:31.033 INFO Fetch successful Feb 13 15:08:31.037317 coreos-metadata[1921]: Feb 13 15:08:31.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:08:31.037317 coreos-metadata[1921]: Feb 13 15:08:31.034 INFO Fetch successful Feb 13 15:08:31.037317 coreos-metadata[1921]: Feb 13 15:08:31.034 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:08:31.044413 coreos-metadata[1921]: Feb 13 15:08:31.038 INFO Fetch successful Feb 13 15:08:31.072008 bash[2011]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:08:31.089915 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:08:31.125310 systemd[1]: Starting sshkeys.service... Feb 13 15:08:31.229453 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:08:31.259324 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:08:31.306631 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:08:31.309112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:08:31.416740 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
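The coreos-metadata fetch sequence above is the IMDSv2 pattern: one PUT to mint a session token, then token-authenticated GETs against the versioned metadata tree (the 404 on meta-data/ipv6 is expected for an instance with no IPv6 address). A minimal sketch of the same exchange; the header names are the standard IMDSv2 ones, the paths come from the log, and the TTL value is an assumption:

    import urllib.request

    BASE = "http://169.254.169.254"

    # Step 1: PUT /latest/api/token mints a short-lived session token,
    # matching the "Putting .../latest/api/token" line above.
    req = urllib.request.Request(
        BASE + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req).read().decode()

    # Step 2: each metadata GET presents the token, as in the fetches above.
    req = urllib.request.Request(
        BASE + "/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req).read().decode())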
Feb 13 15:08:31.421749 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:08:31.428285 dbus-daemon[1922]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1968 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:08:31.471389 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:08:31.506726 polkitd[2072]: Started polkitd version 121 Feb 13 15:08:31.520007 polkitd[2072]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:08:31.522640 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:08:31.520157 polkitd[2072]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:08:31.521516 polkitd[2072]: Finished loading, compiling and executing 2 rules Feb 13 15:08:31.522337 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:08:31.524949 polkitd[2072]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:08:31.575082 systemd-resolved[1874]: System hostname changed to 'ip-172-31-30-142'. Feb 13 15:08:31.575088 systemd-hostnamed[1968]: Hostname set to (transient) Feb 13 15:08:31.657186 locksmithd[1971]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:08:31.658297 ntpd[1926]: bind(24) AF_INET6 fe80::480:6bff:fea6:f697%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:31.658745 ntpd[1926]: 13 Feb 15:08:31 ntpd[1926]: bind(24) AF_INET6 fe80::480:6bff:fea6:f697%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:31.658745 ntpd[1926]: 13 Feb 15:08:31 ntpd[1926]: unable to create socket on eth0 (6) for fe80::480:6bff:fea6:f697%2#123 Feb 13 15:08:31.658745 ntpd[1926]: 13 Feb 15:08:31 ntpd[1926]: failed to init interface for address fe80::480:6bff:fea6:f697%2 Feb 13 15:08:31.658358 ntpd[1926]: unable to create socket on eth0 (6) for fe80::480:6bff:fea6:f697%2#123 Feb 13 15:08:31.658387 ntpd[1926]: failed to init interface for address fe80::480:6bff:fea6:f697%2 Feb 13 15:08:31.710804 containerd[1955]: time="2025-02-13T15:08:31.709246573Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:08:31.753634 coreos-metadata[2043]: Feb 13 15:08:31.753 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:08:31.756704 coreos-metadata[2043]: Feb 13 15:08:31.755 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:08:31.757267 coreos-metadata[2043]: Feb 13 15:08:31.757 INFO Fetch successful Feb 13 15:08:31.757267 coreos-metadata[2043]: Feb 13 15:08:31.757 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:08:31.758141 coreos-metadata[2043]: Feb 13 15:08:31.757 INFO Fetch successful Feb 13 15:08:31.763503 unknown[2043]: wrote ssh authorized keys file for user: core Feb 13 15:08:31.825867 update-ssh-keys[2121]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:08:31.829665 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:08:31.842631 systemd[1]: Finished sshkeys.service. Feb 13 15:08:31.854491 containerd[1955]: time="2025-02-13T15:08:31.854109254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:08:31.862982 containerd[1955]: time="2025-02-13T15:08:31.862088246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:31.862982 containerd[1955]: time="2025-02-13T15:08:31.862170038Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:08:31.862982 containerd[1955]: time="2025-02-13T15:08:31.862209974Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:08:31.862982 containerd[1955]: time="2025-02-13T15:08:31.862550990Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:08:31.862982 containerd[1955]: time="2025-02-13T15:08:31.862586246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:31.862982 containerd[1955]: time="2025-02-13T15:08:31.862719782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.862754474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.864205358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.864241358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.864272438Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.864299126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.864487034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.864960698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.865209950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.865238174Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.865406306Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:08:31.865557 containerd[1955]: time="2025-02-13T15:08:31.865507154Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.883615442Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.883725386Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.883760702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.883832678Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.883868078Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.884135282Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.884607026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.884846342Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.884880746Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.884916974Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.884952638Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.884984462Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.885015434Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:08:31.885811 containerd[1955]: time="2025-02-13T15:08:31.885052010Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885084914Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885114542Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885146726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885174122Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885219794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885251450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885281534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885311138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885339134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885368222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885397946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885428150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885457106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.886431 containerd[1955]: time="2025-02-13T15:08:31.885492302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.887006 containerd[1955]: time="2025-02-13T15:08:31.885519470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.887006 containerd[1955]: time="2025-02-13T15:08:31.885549050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.887006 containerd[1955]: time="2025-02-13T15:08:31.885577682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.887006 containerd[1955]: time="2025-02-13T15:08:31.885609806Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:08:31.887006 containerd[1955]: time="2025-02-13T15:08:31.885663026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.887006 containerd[1955]: time="2025-02-13T15:08:31.885700190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.887006 containerd[1955]: time="2025-02-13T15:08:31.885729230Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889588310Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889673906Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889705574Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889742402Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889800050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889839422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889864910Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:08:31.892791 containerd[1955]: time="2025-02-13T15:08:31.889888802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:08:31.893219 containerd[1955]: time="2025-02-13T15:08:31.890416970Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:08:31.893219 containerd[1955]: time="2025-02-13T15:08:31.890504834Z" level=info msg="Connect containerd service" Feb 13 15:08:31.893219 containerd[1955]: time="2025-02-13T15:08:31.890560166Z" level=info msg="using legacy CRI server" Feb 13 15:08:31.893219 containerd[1955]: time="2025-02-13T15:08:31.890577638Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:08:31.893219 containerd[1955]: time="2025-02-13T15:08:31.890865242Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:08:31.895973 containerd[1955]: time="2025-02-13T15:08:31.895923374Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:08:31.896362 containerd[1955]: time="2025-02-13T15:08:31.896291906Z" level=info msg="Start subscribing containerd event" Feb 13 15:08:31.896429 containerd[1955]: time="2025-02-13T15:08:31.896379758Z" level=info msg="Start recovering state" Feb 13 15:08:31.896546 containerd[1955]: time="2025-02-13T15:08:31.896504126Z" level=info msg="Start event monitor" Feb 13 15:08:31.896602 containerd[1955]: time="2025-02-13T15:08:31.896542826Z" level=info msg="Start snapshots syncer" Feb 13 15:08:31.896602 containerd[1955]: time="2025-02-13T15:08:31.896567354Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:08:31.896602 containerd[1955]: time="2025-02-13T15:08:31.896585870Z" level=info msg="Start streaming server" Feb 13 15:08:31.897219 containerd[1955]: time="2025-02-13T15:08:31.897184454Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:08:31.897533 containerd[1955]: time="2025-02-13T15:08:31.897506186Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:08:31.898094 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:08:31.901600 containerd[1955]: time="2025-02-13T15:08:31.898135778Z" level=info msg="containerd successfully booted in 0.196491s" Feb 13 15:08:32.168958 systemd-networkd[1872]: eth0: Gained IPv6LL Feb 13 15:08:32.177954 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:08:32.182603 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:08:32.194294 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:08:32.207103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:32.213832 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:08:32.307293 sshd_keygen[1974]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:08:32.331318 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
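[editor's note] The CRI plugin's "no network config found in /etc/cni/net.d" error above is expected at this stage: containerd's CNI conf syncer (started as "Start cni network conf syncer for default") keeps watching that directory, and the error clears once a pod network add-on installs a conflist there. Purely as an illustration of the file format being watched for, a minimal bridge conflist; the network name and subnet are hypothetical, not anything this node will use:

```python
# Illustration only: the shape of a CNI .conflist that would satisfy the
# "no network config found in /etc/cni/net.d" check logged above. In a real
# cluster the network add-on (flannel, calico, ...) installs its own file.
import json, pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "examplenet",                      # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",      # hypothetical pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
path.write_text(json.dumps(conflist, indent=2))
print("wrote", path)
```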
Feb 13 15:08:32.345629 tar[1937]: linux-arm64/LICENSE Feb 13 15:08:32.345629 tar[1937]: linux-arm64/README.md Feb 13 15:08:32.361899 amazon-ssm-agent[2126]: Initializing new seelog logger Feb 13 15:08:32.363833 amazon-ssm-agent[2126]: New Seelog Logger Creation Complete Feb 13 15:08:32.363833 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.363833 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.363833 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 processing appconfig overrides Feb 13 15:08:32.364720 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.364841 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.365049 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 processing appconfig overrides Feb 13 15:08:32.365422 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.365505 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.365684 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 processing appconfig overrides Feb 13 15:08:32.366532 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO Proxy environment variables: Feb 13 15:08:32.369319 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.370810 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:32.370810 amazon-ssm-agent[2126]: 2025/02/13 15:08:32 processing appconfig overrides Feb 13 15:08:32.373598 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:08:32.432620 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:08:32.450503 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:08:32.460526 systemd[1]: Started sshd@0-172.31.30.142:22-139.178.68.195:55722.service - OpenSSH per-connection server daemon (139.178.68.195:55722). Feb 13 15:08:32.467530 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO http_proxy: Feb 13 15:08:32.493303 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:08:32.493845 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:08:32.505570 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:08:32.555061 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:08:32.565483 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:08:32.571867 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO no_proxy: Feb 13 15:08:32.571714 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:08:32.575428 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:08:32.669330 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO https_proxy: Feb 13 15:08:32.770929 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:08:32.801710 sshd[2154]: Accepted publickey for core from 139.178.68.195 port 55722 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:08:32.807157 sshd-session[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:08:32.826683 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
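[editor's note] The "SHA256:3/htRDj1..." string in the Accepted publickey record above is OpenSSH's key fingerprint format: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch that reproduces the format from an authorized_keys entry (the key below is a placeholder, not the key from this log):

```python
# Recompute an OpenSSH-style fingerprint (as in "Accepted publickey ...
# SHA256:...") from an authorized_keys line: base64(sha256(raw key blob))
# with the trailing '=' padding stripped. The key blob here is a placeholder.
import base64
import hashlib

authorized_key = ("ssh-ed25519 "
                  "AAAAC3NzaC1lZDI1NTE5AAAAIPlaceholderPlaceholderPlaceholderPlaceholde "
                  "user@host")

_, b64_blob, *_ = authorized_key.split()
blob = base64.b64decode(b64_blob)                  # raw wire-format key
digest = hashlib.sha256(blob).digest()
fp = base64.b64encode(digest).decode().rstrip("=")
print(f"SHA256:{fp}")
```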
Feb 13 15:08:32.839387 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:08:32.870927 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:08:32.871393 systemd-logind[1931]: New session 1 of user core. Feb 13 15:08:32.890196 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:08:32.907487 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:08:32.924680 (systemd)[2167]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:08:32.931748 systemd-logind[1931]: New session c1 of user core. Feb 13 15:08:32.968755 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO Agent will take identity from EC2 Feb 13 15:08:33.070845 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:08:33.178870 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:08:33.278276 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:08:33.360422 systemd[2167]: Queued start job for default target default.target. Feb 13 15:08:33.371212 systemd[2167]: Created slice app.slice - User Application Slice. Feb 13 15:08:33.371282 systemd[2167]: Reached target paths.target - Paths. Feb 13 15:08:33.371371 systemd[2167]: Reached target timers.target - Timers. Feb 13 15:08:33.381071 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:08:33.381054 systemd[2167]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:08:33.417232 systemd[2167]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:08:33.417495 systemd[2167]: Reached target sockets.target - Sockets. Feb 13 15:08:33.417602 systemd[2167]: Reached target basic.target - Basic System. Feb 13 15:08:33.417690 systemd[2167]: Reached target default.target - Main User Target. Feb 13 15:08:33.417727 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:08:33.417750 systemd[2167]: Startup finished in 470ms. Feb 13 15:08:33.430707 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:08:33.481070 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:08:33.582749 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:08:33.616728 systemd[1]: Started sshd@1-172.31.30.142:22-139.178.68.195:50780.service - OpenSSH per-connection server daemon (139.178.68.195:50780). Feb 13 15:08:33.687291 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:08:33.688075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:33.691706 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:08:33.695935 systemd[1]: Startup finished in 1.301s (kernel) + 9.901s (initrd) + 9.043s (userspace) = 20.247s. 
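[editor's note] A detail worth noting in the "Startup finished" summary above: the naive sum of the three printed phases is 20.245 s, 2 ms short of the printed total of 20.247 s. This is not a logging error; systemd sums the raw monotonic timestamps before rounding, so the individually rounded parts need not add up exactly. Quick check:

```python
# The phase durations are rounded to milliseconds individually, so their sum
# can differ from the (also rounded) total by a few ms, as in the log above.
kernel, initrd, userspace = 1.301, 9.901, 9.043
print(f"naive sum = {kernel + initrd + userspace:.3f}s, logged total = 20.247s")
# naive sum = 20.245s -> the 2 ms gap is rounding, not a discrepancy.
```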
Feb 13 15:08:33.713547 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:08:33.770437 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [Registrar] Starting registrar module Feb 13 15:08:33.770437 amazon-ssm-agent[2126]: 2025-02-13 15:08:32 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:08:33.770437 amazon-ssm-agent[2126]: 2025-02-13 15:08:33 INFO [EC2Identity] EC2 registration was successful. Feb 13 15:08:33.770437 amazon-ssm-agent[2126]: 2025-02-13 15:08:33 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:08:33.770437 amazon-ssm-agent[2126]: 2025-02-13 15:08:33 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:08:33.770437 amazon-ssm-agent[2126]: 2025-02-13 15:08:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:08:33.788191 amazon-ssm-agent[2126]: 2025-02-13 15:08:33 INFO [CredentialRefresher] Next credential rotation will be in 30.016658314166666 minutes Feb 13 15:08:33.849356 sshd[2179]: Accepted publickey for core from 139.178.68.195 port 50780 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:08:33.851630 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:08:33.864924 systemd-logind[1931]: New session 2 of user core. Feb 13 15:08:33.869235 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:08:34.001420 sshd[2194]: Connection closed by 139.178.68.195 port 50780 Feb 13 15:08:34.002546 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Feb 13 15:08:34.011116 systemd[1]: sshd@1-172.31.30.142:22-139.178.68.195:50780.service: Deactivated successfully. Feb 13 15:08:34.015619 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:08:34.017540 systemd-logind[1931]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:08:34.020643 systemd-logind[1931]: Removed session 2. Feb 13 15:08:34.047112 systemd[1]: Started sshd@2-172.31.30.142:22-139.178.68.195:50792.service - OpenSSH per-connection server daemon (139.178.68.195:50792). Feb 13 15:08:34.230283 sshd[2200]: Accepted publickey for core from 139.178.68.195 port 50792 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:08:34.233962 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:08:34.245594 systemd-logind[1931]: New session 3 of user core. Feb 13 15:08:34.250087 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:08:34.372547 sshd[2203]: Connection closed by 139.178.68.195 port 50792 Feb 13 15:08:34.372326 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Feb 13 15:08:34.381927 systemd[1]: sshd@2-172.31.30.142:22-139.178.68.195:50792.service: Deactivated successfully. Feb 13 15:08:34.390320 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:08:34.396613 systemd-logind[1931]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:08:34.417340 systemd[1]: Started sshd@3-172.31.30.142:22-139.178.68.195:50794.service - OpenSSH per-connection server daemon (139.178.68.195:50794). Feb 13 15:08:34.419671 systemd-logind[1931]: Removed session 3. 
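[editor's note] The CredentialRefresher records above show the agent entering a refresh loop with the next rotation scheduled roughly 30 minutes out. A generic sketch of such a loop follows; the fetcher and the rotate-at-half-lifetime policy are assumptions for illustration, not the SSM agent's actual algorithm:

```python
# Generic credential-refresh loop of the kind the CredentialRefresher log
# lines describe: fetch credentials, sleep until well before expiry, repeat.
# fetch_credentials() and the `early` policy are illustrative assumptions.
import time
from dataclasses import dataclass

@dataclass
class Credentials:
    token: str
    expires_at: float  # unix time

def fetch_credentials() -> Credentials:
    # Placeholder: a real agent would call its instance-profile endpoint.
    return Credentials(token="example", expires_at=time.time() + 3600)

def refresh_loop(early: float = 0.5) -> None:
    """Rotate after `early` fraction of the remaining lifetime has elapsed."""
    while True:
        creds = fetch_credentials()
        wait = max((creds.expires_at - time.time()) * early, 1.0)
        print(f"next credential rotation in {wait / 60:.2f} minutes")
        time.sleep(wait)
```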
Feb 13 15:08:34.444843 kubelet[2184]: E0213 15:08:34.443438 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:08:34.447671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:08:34.448036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:08:34.449672 systemd[1]: kubelet.service: Consumed 1.285s CPU time, 230.1M memory peak. Feb 13 15:08:34.615686 sshd[2208]: Accepted publickey for core from 139.178.68.195 port 50794 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:08:34.619205 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:08:34.627973 systemd-logind[1931]: New session 4 of user core. Feb 13 15:08:34.641052 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:08:34.657795 ntpd[1926]: Listen normally on 7 eth0 [fe80::480:6bff:fea6:f697%2]:123 Feb 13 15:08:34.658411 ntpd[1926]: 13 Feb 15:08:34 ntpd[1926]: Listen normally on 7 eth0 [fe80::480:6bff:fea6:f697%2]:123 Feb 13 15:08:34.775831 sshd[2212]: Connection closed by 139.178.68.195 port 50794 Feb 13 15:08:34.777184 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Feb 13 15:08:34.784665 systemd[1]: sshd@3-172.31.30.142:22-139.178.68.195:50794.service: Deactivated successfully. Feb 13 15:08:34.789670 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:08:34.793408 systemd-logind[1931]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:08:34.799872 amazon-ssm-agent[2126]: 2025-02-13 15:08:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:08:34.813363 systemd-logind[1931]: Removed session 4. Feb 13 15:08:34.820975 systemd[1]: Started sshd@4-172.31.30.142:22-139.178.68.195:50798.service - OpenSSH per-connection server daemon (139.178.68.195:50798). Feb 13 15:08:34.902710 amazon-ssm-agent[2126]: 2025-02-13 15:08:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2218) started Feb 13 15:08:35.004693 amazon-ssm-agent[2126]: 2025-02-13 15:08:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:08:35.011877 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 50798 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:08:35.015977 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:08:35.026340 systemd-logind[1931]: New session 5 of user core. Feb 13 15:08:35.034092 systemd[1]: Started session-5.scope - Session 5 of User core. 
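[editor's note] The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until the node is provisioned (on kubeadm-managed nodes, `kubeadm init` or `kubeadm join` writes it), so the service exits and systemd keeps rescheduling it. Purely as an illustration of the file the error refers to, a sketch that writes a minimal KubeletConfiguration; the field values are examples, not this node's eventual config:

```python
# Illustration of the file kubelet reports missing above. On kubeadm nodes
# `kubeadm join` generates the real /var/lib/kubelet/config.yaml; the values
# below are example settings only. cgroupDriver=systemd matches the
# SystemdCgroup:true runc option in the containerd config dump earlier.
import pathlib

minimal_kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(minimal_kubelet_config)
print("wrote", path)
```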
Feb 13 15:08:35.154676 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:08:35.155697 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:08:35.176403 sudo[2232]: pam_unix(sudo:session): session closed for user root Feb 13 15:08:35.200889 sshd[2231]: Connection closed by 139.178.68.195 port 50798 Feb 13 15:08:35.200607 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Feb 13 15:08:35.207290 systemd[1]: sshd@4-172.31.30.142:22-139.178.68.195:50798.service: Deactivated successfully. Feb 13 15:08:35.211994 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:08:35.214168 systemd-logind[1931]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:08:35.215922 systemd-logind[1931]: Removed session 5. Feb 13 15:08:35.254235 systemd[1]: Started sshd@5-172.31.30.142:22-139.178.68.195:50810.service - OpenSSH per-connection server daemon (139.178.68.195:50810). Feb 13 15:08:35.434803 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 50810 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:08:35.437803 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:08:35.446154 systemd-logind[1931]: New session 6 of user core. Feb 13 15:08:35.458070 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:08:35.563736 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:08:35.564408 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:08:35.570631 sudo[2242]: pam_unix(sudo:session): session closed for user root Feb 13 15:08:35.580755 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:08:35.582006 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:08:35.608217 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:08:35.660700 augenrules[2264]: No rules Feb 13 15:08:35.663134 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:08:35.664124 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:08:35.667383 sudo[2241]: pam_unix(sudo:session): session closed for user root Feb 13 15:08:35.690824 sshd[2240]: Connection closed by 139.178.68.195 port 50810 Feb 13 15:08:35.691547 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Feb 13 15:08:35.698175 systemd[1]: sshd@5-172.31.30.142:22-139.178.68.195:50810.service: Deactivated successfully. Feb 13 15:08:35.698664 systemd-logind[1931]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:08:35.702172 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:08:35.706037 systemd-logind[1931]: Removed session 6. Feb 13 15:08:35.737483 systemd[1]: Started sshd@6-172.31.30.142:22-139.178.68.195:50824.service - OpenSSH per-connection server daemon (139.178.68.195:50824). Feb 13 15:08:35.913239 sshd[2273]: Accepted publickey for core from 139.178.68.195 port 50824 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:08:35.915702 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:08:35.925988 systemd-logind[1931]: New session 7 of user core. Feb 13 15:08:35.933028 systemd[1]: Started session-7.scope - Session 7 of User core. 
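[editor's note] The sudo records above follow a fixed "user : PWD=... ; USER=... ; COMMAND=..." layout, which makes them easy to mine from a captured journal. A small parser over one of the lines above:

```python
# Parse the "core : PWD=... ; USER=... ; COMMAND=..." sudo records seen above.
import re

SUDO_RE = re.compile(
    r"sudo\[\d+\]:\s+(?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

line = ("Feb 13 15:08:35.154676 sudo[2232]: core : PWD=/home/core ; "
        "USER=root ; COMMAND=/usr/sbin/setenforce 1")
m = SUDO_RE.search(line)
assert m is not None
print(m.group("user"), "ran", m.group("cmd"), "as", m.group("runas"))
```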
Feb 13 15:08:36.040363 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:08:36.041040 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:08:36.977374 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:08:36.994292 (dockerd)[2293]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:08:37.531067 dockerd[2293]: time="2025-02-13T15:08:37.530173686Z" level=info msg="Starting up" Feb 13 15:08:37.181335 systemd-resolved[1874]: Clock change detected. Flushing caches. Feb 13 15:08:37.220984 systemd-journald[1487]: Time jumped backwards, rotating. Feb 13 15:08:37.718003 dockerd[2293]: time="2025-02-13T15:08:37.717466270Z" level=info msg="Loading containers: start." Feb 13 15:08:38.026986 kernel: Initializing XFRM netlink socket Feb 13 15:08:38.082979 (udev-worker)[2406]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:08:38.177671 systemd-networkd[1872]: docker0: Link UP Feb 13 15:08:38.218308 dockerd[2293]: time="2025-02-13T15:08:38.218231373Z" level=info msg="Loading containers: done." Feb 13 15:08:38.242375 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2168940279-merged.mount: Deactivated successfully. Feb 13 15:08:38.274419 dockerd[2293]: time="2025-02-13T15:08:38.274285449Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:08:38.274419 dockerd[2293]: time="2025-02-13T15:08:38.274414905Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:08:38.274676 dockerd[2293]: time="2025-02-13T15:08:38.274638861Z" level=info msg="Daemon has completed initialization" Feb 13 15:08:38.355066 dockerd[2293]: time="2025-02-13T15:08:38.354802377Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:08:38.355129 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:08:39.281197 containerd[1955]: time="2025-02-13T15:08:39.280852534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:08:40.057488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132319556.mount: Deactivated successfully. 
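[editor's note] The out-of-order timestamps around docker's startup above are an NTP clock step: systemd-resolved logs "Clock change detected" and journald rotates because wall-clock time jumped backwards, which is why dockerd's "Starting up" carries a later timestamp than the records that follow it. Code that measures durations should therefore use the monotonic clock, which a step does not affect. A tiny illustration:

```python
# Wall-clock time (time.time) can jump when ntpd steps the clock, as in the
# "Time jumped backwards" records above; time.monotonic is unaffected by
# steps, so elapsed-time measurements should always use it.
import time

t0_wall, t0_mono = time.time(), time.monotonic()
time.sleep(0.1)
wall = time.time() - t0_wall        # can even be negative across a backwards step
mono = time.monotonic() - t0_mono   # tracks real elapsed time regardless of steps
print(f"wall delta {wall:.3f}s, monotonic delta {mono:.3f}s")
```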
Feb 13 15:08:41.683489 containerd[1955]: time="2025-02-13T15:08:41.683406110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:41.685539 containerd[1955]: time="2025-02-13T15:08:41.685458242Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 15:08:41.686980 containerd[1955]: time="2025-02-13T15:08:41.686878850Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:41.692719 containerd[1955]: time="2025-02-13T15:08:41.692668442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:41.697207 containerd[1955]: time="2025-02-13T15:08:41.696632006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.415429348s" Feb 13 15:08:41.697207 containerd[1955]: time="2025-02-13T15:08:41.696708794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:08:41.699726 containerd[1955]: time="2025-02-13T15:08:41.699057242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:08:43.234175 containerd[1955]: time="2025-02-13T15:08:43.234038558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:43.236302 containerd[1955]: time="2025-02-13T15:08:43.235786094Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 15:08:43.238347 containerd[1955]: time="2025-02-13T15:08:43.238243022Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:43.244676 containerd[1955]: time="2025-02-13T15:08:43.244580594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:43.247432 containerd[1955]: time="2025-02-13T15:08:43.247163162Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.547979428s" Feb 13 15:08:43.247432 containerd[1955]: time="2025-02-13T15:08:43.247230950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:08:43.248806 
containerd[1955]: time="2025-02-13T15:08:43.248538974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:08:44.175177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:08:44.184551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:45.479077 containerd[1955]: time="2025-02-13T15:08:45.478887629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:45.483231 containerd[1955]: time="2025-02-13T15:08:45.481596449Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 15:08:45.495089 containerd[1955]: time="2025-02-13T15:08:45.495032657Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:45.509861 containerd[1955]: time="2025-02-13T15:08:45.509795801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:45.512496 containerd[1955]: time="2025-02-13T15:08:45.512426333Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 2.263830527s" Feb 13 15:08:45.512712 containerd[1955]: time="2025-02-13T15:08:45.512677385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:08:45.513873 containerd[1955]: time="2025-02-13T15:08:45.513830465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:08:45.514791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:45.517087 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:08:45.598523 kubelet[2557]: E0213 15:08:45.598428 2557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:08:45.604639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:08:45.605259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:08:45.605814 systemd[1]: kubelet.service: Consumed 300ms CPU time, 96M memory peak. Feb 13 15:08:46.934100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562242994.mount: Deactivated successfully. 
Feb 13 15:08:47.461973 containerd[1955]: time="2025-02-13T15:08:47.461893735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:47.463478 containerd[1955]: time="2025-02-13T15:08:47.463391515Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 15:08:47.464646 containerd[1955]: time="2025-02-13T15:08:47.464560855Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:47.468702 containerd[1955]: time="2025-02-13T15:08:47.468598375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:47.470182 containerd[1955]: time="2025-02-13T15:08:47.469934875Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.955886934s" Feb 13 15:08:47.470182 containerd[1955]: time="2025-02-13T15:08:47.470013547Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:08:47.471241 containerd[1955]: time="2025-02-13T15:08:47.470782903Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:08:48.353857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542363355.mount: Deactivated successfully. 
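[editor's note] Each "Pulled image" record above pairs a byte count with a duration, so the effective registry throughput can be read straight off the log; for kube-proxy, 26769256 bytes in 1.955886934 s is roughly 13 MiB/s. A quick computation over the figures logged above:

```python
# Effective pull throughput from the "bytes read" / "in <duration>" pairs
# in the containerd records above.
pulls = {
    # image: (bytes read, seconds) -- values copied from the log records
    "kube-apiserver:v1.31.6":          (25620375, 2.415429348),
    "kube-controller-manager:v1.31.6": (22471773, 1.547979428),
    "kube-scheduler:v1.31.6":          (17024540, 2.263830527),
    "kube-proxy:v1.31.6":              (26769256, 1.955886934),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image:32s} {nbytes / secs / 2**20:6.1f} MiB/s")
```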
Feb 13 15:08:49.486713 containerd[1955]: time="2025-02-13T15:08:49.486653241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:49.489670 containerd[1955]: time="2025-02-13T15:08:49.489565545Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:08:49.491350 containerd[1955]: time="2025-02-13T15:08:49.491280201Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:49.497707 containerd[1955]: time="2025-02-13T15:08:49.497609541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:49.500571 containerd[1955]: time="2025-02-13T15:08:49.500387613Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.029558126s" Feb 13 15:08:49.500571 containerd[1955]: time="2025-02-13T15:08:49.500439561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:08:49.502111 containerd[1955]: time="2025-02-13T15:08:49.502040481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:08:49.999842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726634613.mount: Deactivated successfully. 
Feb 13 15:08:50.019350 containerd[1955]: time="2025-02-13T15:08:50.019273783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:50.020958 containerd[1955]: time="2025-02-13T15:08:50.020863219Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 15:08:50.022639 containerd[1955]: time="2025-02-13T15:08:50.022569967Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:50.028974 containerd[1955]: time="2025-02-13T15:08:50.027481387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:50.029440 containerd[1955]: time="2025-02-13T15:08:50.029375131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 527.131538ms" Feb 13 15:08:50.029580 containerd[1955]: time="2025-02-13T15:08:50.029551531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:08:50.030429 containerd[1955]: time="2025-02-13T15:08:50.030320095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:08:50.625663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010138781.mount: Deactivated successfully. Feb 13 15:08:54.383568 containerd[1955]: time="2025-02-13T15:08:54.383048521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:54.388689 containerd[1955]: time="2025-02-13T15:08:54.388602805Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:54.390996 containerd[1955]: time="2025-02-13T15:08:54.388991677Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 15:08:54.398555 containerd[1955]: time="2025-02-13T15:08:54.398478625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:54.403564 containerd[1955]: time="2025-02-13T15:08:54.403507741Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.37287483s" Feb 13 15:08:54.403749 containerd[1955]: time="2025-02-13T15:08:54.403719697Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:08:55.675395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 15:08:55.685561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:56.592478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:56.609749 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:08:56.691620 kubelet[2702]: E0213 15:08:56.691558 2702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:08:56.697335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:08:56.698185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:08:56.699213 systemd[1]: kubelet.service: Consumed 281ms CPU time, 93.9M memory peak. Feb 13 15:08:59.794583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:59.795640 systemd[1]: kubelet.service: Consumed 281ms CPU time, 93.9M memory peak. Feb 13 15:08:59.812482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:59.872958 systemd[1]: Reload requested from client PID 2717 ('systemctl') (unit session-7.scope)... Feb 13 15:08:59.872986 systemd[1]: Reloading... Feb 13 15:09:00.102013 zram_generator::config[2770]: No configuration found. Feb 13 15:09:00.332873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:09:00.557100 systemd[1]: Reloading finished in 683 ms. Feb 13 15:09:00.661346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:00.670721 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:00.674166 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:09:00.674677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:00.674762 systemd[1]: kubelet.service: Consumed 199ms CPU time, 81.5M memory peak. Feb 13 15:09:00.681881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:01.121358 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:09:01.644210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:01.655611 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:09:01.729389 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:09:01.729389 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:09:01.729389 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:09:01.729933 kubelet[2831]: I0213 15:09:01.729516 2831 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:09:03.343547 kubelet[2831]: I0213 15:09:03.343495 2831 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:09:03.346182 kubelet[2831]: I0213 15:09:03.344112 2831 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:09:03.346182 kubelet[2831]: I0213 15:09:03.344564 2831 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:09:03.377502 kubelet[2831]: E0213 15:09:03.377411 2831 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:03.382273 kubelet[2831]: I0213 15:09:03.381924 2831 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:09:03.399475 kubelet[2831]: E0213 15:09:03.399402 2831 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:09:03.399475 kubelet[2831]: I0213 15:09:03.399462 2831 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:09:03.406237 kubelet[2831]: I0213 15:09:03.406159 2831 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:09:03.407556 kubelet[2831]: I0213 15:09:03.407504 2831 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:09:03.407917 kubelet[2831]: I0213 15:09:03.407843 2831 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:09:03.408338 kubelet[2831]: I0213 15:09:03.407911 2831 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:09:03.408566 kubelet[2831]: I0213 15:09:03.408407 2831 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:09:03.408566 kubelet[2831]: I0213 15:09:03.408437 2831 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:09:03.408793 kubelet[2831]: I0213 15:09:03.408758 2831 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:03.410977 kubelet[2831]: I0213 15:09:03.410916 2831 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:09:03.411079 kubelet[2831]: I0213 15:09:03.410986 2831 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:09:03.411079 kubelet[2831]: I0213 15:09:03.411065 2831 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:09:03.411199 kubelet[2831]: I0213 15:09:03.411092 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:09:03.418916 kubelet[2831]: I0213 15:09:03.418620 2831 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:09:03.422616 kubelet[2831]: I0213 15:09:03.422556 2831 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:09:03.424976 kubelet[2831]: W0213 15:09:03.424158 2831 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:09:03.425543 kubelet[2831]: I0213 15:09:03.425509 2831 server.go:1269] "Started kubelet" Feb 13 15:09:03.426027 kubelet[2831]: W0213 15:09:03.425915 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-142&limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:03.426203 kubelet[2831]: E0213 15:09:03.426167 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-142&limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:03.434034 kubelet[2831]: W0213 15:09:03.433902 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:03.434201 kubelet[2831]: E0213 15:09:03.434047 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:03.434201 kubelet[2831]: I0213 15:09:03.434123 2831 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:09:03.435348 kubelet[2831]: I0213 15:09:03.435250 2831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:09:03.436158 kubelet[2831]: I0213 15:09:03.436126 2831 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:09:03.438972 kubelet[2831]: I0213 15:09:03.438512 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:09:03.441799 kubelet[2831]: E0213 15:09:03.438431 2831 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.142:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.142:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-142.1823cd0e8a1befde default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-142,UID:ip-172-31-30-142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-142,},FirstTimestamp:2025-02-13 15:09:03.425466334 +0000 UTC m=+1.763191222,LastTimestamp:2025-02-13 15:09:03.425466334 +0000 UTC m=+1.763191222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-142,}" Feb 13 15:09:03.445011 kubelet[2831]: I0213 15:09:03.444206 2831 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:09:03.446034 kubelet[2831]: I0213 15:09:03.445915 2831 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:09:03.452156 kubelet[2831]: I0213 15:09:03.450152 2831 volume_manager.go:289] "Starting Kubelet Volume Manager" 
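The repeated "failed to list *v1.Node / *v1.Service ... connection refused" pairs above are standard client-go reflectors inside the kubelet's informers: the kube-apiserver static pod is not serving on 172.31.30.142:6443 yet, so every initial list fails and is retried with backoff. A minimal equivalent, with the caveat that Insecure is sketch-only (the real kubelet authenticates with rotating client certificates):

```go
// Sketch: a Node informer against the endpoint from the log. While the
// API server is down its reflector logs the same list/watch failures.
package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host:            "https://172.31.30.142:6443",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // sketch only
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	// The reflector inside Run retries failed lists with backoff,
	// producing the repeating "connection refused" lines seen above.
	stop := make(chan struct{})
	nodeInformer.Run(stop)
}
```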
Feb 13 15:09:03.452156 kubelet[2831]: E0213 15:09:03.450568 2831 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-142\" not found" Feb 13 15:09:03.452156 kubelet[2831]: I0213 15:09:03.451026 2831 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:09:03.452156 kubelet[2831]: I0213 15:09:03.451122 2831 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:09:03.455780 kubelet[2831]: W0213 15:09:03.455661 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:03.456009 kubelet[2831]: E0213 15:09:03.455797 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:03.456325 kubelet[2831]: E0213 15:09:03.456209 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-142?timeout=10s\": dial tcp 172.31.30.142:6443: connect: connection refused" interval="200ms" Feb 13 15:09:03.457432 kubelet[2831]: I0213 15:09:03.457339 2831 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:09:03.457697 kubelet[2831]: I0213 15:09:03.457650 2831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:09:03.460280 kubelet[2831]: E0213 15:09:03.460217 2831 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:09:03.463775 kubelet[2831]: I0213 15:09:03.463729 2831 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:09:03.492337 kubelet[2831]: I0213 15:09:03.492272 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:09:03.494632 kubelet[2831]: I0213 15:09:03.494590 2831 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:09:03.494819 kubelet[2831]: I0213 15:09:03.494797 2831 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:09:03.495465 kubelet[2831]: I0213 15:09:03.494917 2831 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:09:03.495465 kubelet[2831]: E0213 15:09:03.495045 2831 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:09:03.505548 kubelet[2831]: W0213 15:09:03.505470 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:03.505695 kubelet[2831]: E0213 15:09:03.505556 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:03.509870 kubelet[2831]: I0213 15:09:03.509821 2831 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:09:03.509870 kubelet[2831]: I0213 15:09:03.509858 2831 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:09:03.510055 kubelet[2831]: I0213 15:09:03.509890 2831 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:03.541251 kubelet[2831]: I0213 15:09:03.541207 2831 policy_none.go:49] "None policy: Start" Feb 13 15:09:03.543252 kubelet[2831]: I0213 15:09:03.543193 2831 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:09:03.543252 kubelet[2831]: I0213 15:09:03.543259 2831 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:09:03.551359 kubelet[2831]: E0213 15:09:03.551272 2831 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-142\" not found" Feb 13 15:09:03.557377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:09:03.573503 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:09:03.581979 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:09:03.592161 kubelet[2831]: I0213 15:09:03.591842 2831 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:09:03.593136 kubelet[2831]: I0213 15:09:03.592176 2831 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:09:03.593136 kubelet[2831]: I0213 15:09:03.592202 2831 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:09:03.595437 kubelet[2831]: I0213 15:09:03.594074 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:09:03.599453 kubelet[2831]: E0213 15:09:03.599344 2831 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-142\" not found" Feb 13 15:09:03.623440 systemd[1]: Created slice kubepods-burstable-pod4af438c1935ce8176a1ecb1866339ad2.slice - libcontainer container kubepods-burstable-pod4af438c1935ce8176a1ecb1866339ad2.slice. 
Feb 13 15:09:03.640349 systemd[1]: Created slice kubepods-burstable-pod4641bb9e24ec041ca83277ac572695d0.slice - libcontainer container kubepods-burstable-pod4641bb9e24ec041ca83277ac572695d0.slice. Feb 13 15:09:03.657155 systemd[1]: Created slice kubepods-burstable-pod7a743e145509ac3f0394193cae36038e.slice - libcontainer container kubepods-burstable-pod7a743e145509ac3f0394193cae36038e.slice. Feb 13 15:09:03.658981 kubelet[2831]: E0213 15:09:03.658886 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-142?timeout=10s\": dial tcp 172.31.30.142:6443: connect: connection refused" interval="400ms" Feb 13 15:09:03.695349 kubelet[2831]: I0213 15:09:03.695289 2831 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-142" Feb 13 15:09:03.695919 kubelet[2831]: E0213 15:09:03.695880 2831 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.142:6443/api/v1/nodes\": dial tcp 172.31.30.142:6443: connect: connection refused" node="ip-172-31-30-142" Feb 13 15:09:03.752550 kubelet[2831]: I0213 15:09:03.752496 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4af438c1935ce8176a1ecb1866339ad2-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-142\" (UID: \"4af438c1935ce8176a1ecb1866339ad2\") " pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:03.752642 kubelet[2831]: I0213 15:09:03.752560 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:03.752642 kubelet[2831]: I0213 15:09:03.752599 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:03.752817 kubelet[2831]: I0213 15:09:03.752638 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:03.752817 kubelet[2831]: I0213 15:09:03.752681 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a743e145509ac3f0394193cae36038e-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-142\" (UID: \"7a743e145509ac3f0394193cae36038e\") " pod="kube-system/kube-scheduler-ip-172-31-30-142" Feb 13 15:09:03.752817 kubelet[2831]: I0213 15:09:03.752717 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4af438c1935ce8176a1ecb1866339ad2-ca-certs\") pod \"kube-apiserver-ip-172-31-30-142\" (UID: \"4af438c1935ce8176a1ecb1866339ad2\") " 
pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:03.752817 kubelet[2831]: I0213 15:09:03.752757 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:03.752817 kubelet[2831]: I0213 15:09:03.752799 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:03.753111 kubelet[2831]: I0213 15:09:03.752843 2831 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4af438c1935ce8176a1ecb1866339ad2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-142\" (UID: \"4af438c1935ce8176a1ecb1866339ad2\") " pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:03.898431 kubelet[2831]: I0213 15:09:03.898294 2831 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-142" Feb 13 15:09:03.899703 kubelet[2831]: E0213 15:09:03.899629 2831 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.142:6443/api/v1/nodes\": dial tcp 172.31.30.142:6443: connect: connection refused" node="ip-172-31-30-142" Feb 13 15:09:03.935410 containerd[1955]: time="2025-02-13T15:09:03.935321532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-142,Uid:4af438c1935ce8176a1ecb1866339ad2,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:03.954486 containerd[1955]: time="2025-02-13T15:09:03.954412056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-142,Uid:4641bb9e24ec041ca83277ac572695d0,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:03.963075 containerd[1955]: time="2025-02-13T15:09:03.962967121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-142,Uid:7a743e145509ac3f0394193cae36038e,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:04.060734 kubelet[2831]: E0213 15:09:04.060639 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-142?timeout=10s\": dial tcp 172.31.30.142:6443: connect: connection refused" interval="800ms" Feb 13 15:09:04.303235 kubelet[2831]: I0213 15:09:04.303134 2831 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-142" Feb 13 15:09:04.303772 kubelet[2831]: E0213 15:09:04.303714 2831 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.142:6443/api/v1/nodes\": dial tcp 172.31.30.142:6443: connect: connection refused" node="ip-172-31-30-142" Feb 13 15:09:04.320604 kubelet[2831]: W0213 15:09:04.320489 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 
13 15:09:04.320724 kubelet[2831]: E0213 15:09:04.320599 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:04.448287 kubelet[2831]: W0213 15:09:04.448132 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-142&limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:04.449245 kubelet[2831]: E0213 15:09:04.448306 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-142&limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:04.811005 kubelet[2831]: W0213 15:09:04.810897 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:04.811265 kubelet[2831]: E0213 15:09:04.811024 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:04.861487 kubelet[2831]: E0213 15:09:04.861406 2831 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-142?timeout=10s\": dial tcp 172.31.30.142:6443: connect: connection refused" interval="1.6s" Feb 13 15:09:04.907783 kubelet[2831]: W0213 15:09:04.907660 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:04.907783 kubelet[2831]: E0213 15:09:04.907730 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:05.107100 kubelet[2831]: I0213 15:09:05.106443 2831 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-142" Feb 13 15:09:05.107100 kubelet[2831]: E0213 15:09:05.106895 2831 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.30.142:6443/api/v1/nodes\": dial tcp 172.31.30.142:6443: connect: connection refused" node="ip-172-31-30-142" Feb 13 15:09:05.405687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586371518.mount: Deactivated successfully. 
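Each "Attempting to register node" / "Unable to register node" pair above is the kubelet POSTing its Node object and getting connection refused back; once the apiserver answers, the same call succeeds and produces the "Successfully registered node" line later in the log. A sketch of that call, run against a fake clientset so it is self-contained:

```go
// Sketch: the node-registration call the kubelet keeps retrying above,
// exercised against client-go's fake clientset (no live API server).
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

func registerNode(cs kubernetes.Interface, name string) error {
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: name}}
	// On this host the POST fails with "dial tcp 172.31.30.142:6443:
	// connect: connection refused" until kube-apiserver is serving.
	_, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{})
	return err
}

func main() {
	cs := fake.NewSimpleClientset()
	if err := registerNode(cs, "ip-172-31-30-142"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("registered ip-172-31-30-142")
}
```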
Feb 13 15:09:05.417039 containerd[1955]: time="2025-02-13T15:09:05.416664216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:05.423528 containerd[1955]: time="2025-02-13T15:09:05.423446316Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:09:05.424956 containerd[1955]: time="2025-02-13T15:09:05.424877448Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:05.426625 containerd[1955]: time="2025-02-13T15:09:05.426544812Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:05.430350 containerd[1955]: time="2025-02-13T15:09:05.430279524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:09:05.442832 containerd[1955]: time="2025-02-13T15:09:05.441702552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:09:05.442832 containerd[1955]: time="2025-02-13T15:09:05.441780228Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:05.448552 containerd[1955]: time="2025-02-13T15:09:05.448481160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:05.450378 containerd[1955]: time="2025-02-13T15:09:05.450332388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.495798292s" Feb 13 15:09:05.453862 containerd[1955]: time="2025-02-13T15:09:05.453789024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.518309512s" Feb 13 15:09:05.467891 containerd[1955]: time="2025-02-13T15:09:05.467787300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.504707067s" Feb 13 15:09:05.568999 kubelet[2831]: E0213 15:09:05.568450 2831 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://172.31.30.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:05.730676 containerd[1955]: time="2025-02-13T15:09:05.729929125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:05.731161 containerd[1955]: time="2025-02-13T15:09:05.730695469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:05.731161 containerd[1955]: time="2025-02-13T15:09:05.730744753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:05.731161 containerd[1955]: time="2025-02-13T15:09:05.730901425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:05.735794 containerd[1955]: time="2025-02-13T15:09:05.735300361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:05.735987 containerd[1955]: time="2025-02-13T15:09:05.735795973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:05.736055 containerd[1955]: time="2025-02-13T15:09:05.735902593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:05.738048 containerd[1955]: time="2025-02-13T15:09:05.736195669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:05.742285 containerd[1955]: time="2025-02-13T15:09:05.741219481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:05.742285 containerd[1955]: time="2025-02-13T15:09:05.741325813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:05.742285 containerd[1955]: time="2025-02-13T15:09:05.741363349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:05.743156 containerd[1955]: time="2025-02-13T15:09:05.742982893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:05.785273 systemd[1]: Started cri-containerd-bcf1359f7bc7ecd46f9d831f8205e296834edf094e604d41f4f8233564cc7e15.scope - libcontainer container bcf1359f7bc7ecd46f9d831f8205e296834edf094e604d41f4f8233564cc7e15. Feb 13 15:09:05.794186 systemd[1]: Started cri-containerd-cfeaed686ce061f69277d488d571ec226a0a3851d46ff54239a1a488ebba2fe6.scope - libcontainer container cfeaed686ce061f69277d488d571ec226a0a3851d46ff54239a1a488ebba2fe6. Feb 13 15:09:05.814066 systemd[1]: Started cri-containerd-b8065025a7953abd460267771ea4e0bf1d55eaa556b999e5b458ad6e0c87a45e.scope - libcontainer container b8065025a7953abd460267771ea4e0bf1d55eaa556b999e5b458ad6e0c87a45e. 
Feb 13 15:09:05.905906 containerd[1955]: time="2025-02-13T15:09:05.905817110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-142,Uid:4641bb9e24ec041ca83277ac572695d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcf1359f7bc7ecd46f9d831f8205e296834edf094e604d41f4f8233564cc7e15\"" Feb 13 15:09:05.921356 containerd[1955]: time="2025-02-13T15:09:05.921232706Z" level=info msg="CreateContainer within sandbox \"bcf1359f7bc7ecd46f9d831f8205e296834edf094e604d41f4f8233564cc7e15\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:09:05.937471 containerd[1955]: time="2025-02-13T15:09:05.937268234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-142,Uid:4af438c1935ce8176a1ecb1866339ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8065025a7953abd460267771ea4e0bf1d55eaa556b999e5b458ad6e0c87a45e\"" Feb 13 15:09:05.946971 kubelet[2831]: W0213 15:09:05.946292 2831 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.142:6443: connect: connection refused Feb 13 15:09:05.946971 kubelet[2831]: E0213 15:09:05.946360 2831 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.142:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:05.950244 containerd[1955]: time="2025-02-13T15:09:05.950144318Z" level=info msg="CreateContainer within sandbox \"b8065025a7953abd460267771ea4e0bf1d55eaa556b999e5b458ad6e0c87a45e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:09:05.958202 containerd[1955]: time="2025-02-13T15:09:05.958138634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-142,Uid:7a743e145509ac3f0394193cae36038e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfeaed686ce061f69277d488d571ec226a0a3851d46ff54239a1a488ebba2fe6\"" Feb 13 15:09:05.971378 containerd[1955]: time="2025-02-13T15:09:05.971156091Z" level=info msg="CreateContainer within sandbox \"cfeaed686ce061f69277d488d571ec226a0a3851d46ff54239a1a488ebba2fe6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:09:05.976600 containerd[1955]: time="2025-02-13T15:09:05.976547175Z" level=info msg="CreateContainer within sandbox \"bcf1359f7bc7ecd46f9d831f8205e296834edf094e604d41f4f8233564cc7e15\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf\"" Feb 13 15:09:05.980913 containerd[1955]: time="2025-02-13T15:09:05.980643147Z" level=info msg="StartContainer for \"97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf\"" Feb 13 15:09:06.000016 containerd[1955]: time="2025-02-13T15:09:05.999926787Z" level=info msg="CreateContainer within sandbox \"b8065025a7953abd460267771ea4e0bf1d55eaa556b999e5b458ad6e0c87a45e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b66e1ac6895e865ea497fabf99bd40aebd3098b961982dc46fb0ecc1c2776814\"" Feb 13 15:09:06.001997 containerd[1955]: time="2025-02-13T15:09:06.001900619Z" level=info msg="StartContainer for \"b66e1ac6895e865ea497fabf99bd40aebd3098b961982dc46fb0ecc1c2776814\"" Feb 
13 15:09:06.012161 containerd[1955]: time="2025-02-13T15:09:06.011520311Z" level=info msg="CreateContainer within sandbox \"cfeaed686ce061f69277d488d571ec226a0a3851d46ff54239a1a488ebba2fe6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647\"" Feb 13 15:09:06.013367 containerd[1955]: time="2025-02-13T15:09:06.013273163Z" level=info msg="StartContainer for \"7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647\"" Feb 13 15:09:06.043309 systemd[1]: Started cri-containerd-97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf.scope - libcontainer container 97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf. Feb 13 15:09:06.090380 systemd[1]: Started cri-containerd-7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647.scope - libcontainer container 7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647. Feb 13 15:09:06.109094 systemd[1]: Started cri-containerd-b66e1ac6895e865ea497fabf99bd40aebd3098b961982dc46fb0ecc1c2776814.scope - libcontainer container b66e1ac6895e865ea497fabf99bd40aebd3098b961982dc46fb0ecc1c2776814. Feb 13 15:09:06.179978 containerd[1955]: time="2025-02-13T15:09:06.176610492Z" level=info msg="StartContainer for \"97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf\" returns successfully" Feb 13 15:09:06.214076 containerd[1955]: time="2025-02-13T15:09:06.213837264Z" level=info msg="StartContainer for \"7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647\" returns successfully" Feb 13 15:09:06.262283 containerd[1955]: time="2025-02-13T15:09:06.261183948Z" level=info msg="StartContainer for \"b66e1ac6895e865ea497fabf99bd40aebd3098b961982dc46fb0ecc1c2776814\" returns successfully" Feb 13 15:09:06.757777 kubelet[2831]: I0213 15:09:06.757544 2831 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-142" Feb 13 15:09:09.349272 kubelet[2831]: E0213 15:09:09.349203 2831 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-142\" not found" node="ip-172-31-30-142" Feb 13 15:09:09.436550 kubelet[2831]: I0213 15:09:09.436504 2831 apiserver.go:52] "Watching apiserver" Feb 13 15:09:09.452295 kubelet[2831]: I0213 15:09:09.451313 2831 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:09:09.476763 kubelet[2831]: I0213 15:09:09.476442 2831 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-142" Feb 13 15:09:09.476763 kubelet[2831]: E0213 15:09:09.476495 2831 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-142\": node \"ip-172-31-30-142\" not found" Feb 13 15:09:11.627438 systemd[1]: Reload requested from client PID 3105 ('systemctl') (unit session-7.scope)... Feb 13 15:09:11.627883 systemd[1]: Reloading... Feb 13 15:09:11.851001 zram_generator::config[3157]: No configuration found. Feb 13 15:09:12.083517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:09:12.338362 systemd[1]: Reloading finished in 709 ms. Feb 13 15:09:12.382201 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:12.397696 systemd[1]: kubelet.service: Deactivated successfully. 
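The Reload/Stop/Start cycle above is driven by systemctl (the "Reload requested from client PID" lines), but the same sequence has a programmatic equivalent through systemd's D-Bus API. A hedged sketch using the go-systemd bindings; it must run privileged on the node and is an illustration, not what this host actually ran:

```go
// Sketch: daemon-reload plus a kubelet restart via systemd's D-Bus API,
// the programmatic form of the systemctl activity logged above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewSystemConnectionContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if err := conn.ReloadContext(ctx); err != nil { // systemctl daemon-reload
		log.Fatal(err)
	}
	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet.service:", <-done) // "done" on success
}
```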
Feb 13 15:09:12.399077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:12.399178 systemd[1]: kubelet.service: Consumed 2.478s CPU time, 116M memory peak. Feb 13 15:09:12.407484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:13.135220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:13.148559 (kubelet)[3210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:09:13.253081 kubelet[3210]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:09:13.253081 kubelet[3210]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:09:13.253572 kubelet[3210]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:09:13.253572 kubelet[3210]: I0213 15:09:13.253236 3210 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:09:13.271478 kubelet[3210]: I0213 15:09:13.270506 3210 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:09:13.271478 kubelet[3210]: I0213 15:09:13.270554 3210 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:09:13.279115 kubelet[3210]: I0213 15:09:13.271763 3210 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:09:13.279115 kubelet[3210]: I0213 15:09:13.278217 3210 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:09:13.287413 kubelet[3210]: I0213 15:09:13.287110 3210 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:09:13.301156 kubelet[3210]: E0213 15:09:13.301014 3210 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:09:13.301156 kubelet[3210]: I0213 15:09:13.301140 3210 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:09:13.308351 kubelet[3210]: I0213 15:09:13.308163 3210 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:09:13.308517 kubelet[3210]: I0213 15:09:13.308371 3210 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:09:13.309253 kubelet[3210]: I0213 15:09:13.308583 3210 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:09:13.309253 kubelet[3210]: I0213 15:09:13.308635 3210 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:09:13.309253 kubelet[3210]: I0213 15:09:13.308931 3210 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:09:13.309253 kubelet[3210]: I0213 15:09:13.308998 3210 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:09:13.311681 kubelet[3210]: I0213 15:09:13.309059 3210 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:13.311681 kubelet[3210]: I0213 15:09:13.309263 3210 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:09:13.311681 kubelet[3210]: I0213 15:09:13.309308 3210 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:09:13.311681 kubelet[3210]: I0213 15:09:13.309397 3210 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:09:13.311681 kubelet[3210]: I0213 15:09:13.309433 3210 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:09:13.314002 kubelet[3210]: I0213 15:09:13.313717 3210 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:09:13.315413 kubelet[3210]: I0213 15:09:13.315378 3210 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:09:13.317390 kubelet[3210]: I0213 15:09:13.317349 3210 server.go:1269] "Started kubelet" Feb 13 15:09:13.322105 kubelet[3210]: I0213 15:09:13.321849 3210 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:09:13.323394 sudo[3224]: root : 
PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:09:13.324098 sudo[3224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:09:13.345221 kubelet[3210]: I0213 15:09:13.344128 3210 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:09:13.351840 kubelet[3210]: I0213 15:09:13.351792 3210 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:09:13.358003 kubelet[3210]: I0213 15:09:13.355890 3210 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:09:13.358478 kubelet[3210]: I0213 15:09:13.358454 3210 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:09:13.360381 kubelet[3210]: I0213 15:09:13.360345 3210 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:09:13.374089 kubelet[3210]: I0213 15:09:13.373747 3210 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:09:13.374596 kubelet[3210]: E0213 15:09:13.374549 3210 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-30-142\" not found" Feb 13 15:09:13.375980 kubelet[3210]: I0213 15:09:13.375605 3210 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:09:13.378778 kubelet[3210]: I0213 15:09:13.378287 3210 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:09:13.391500 kubelet[3210]: I0213 15:09:13.391027 3210 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:09:13.418628 kubelet[3210]: I0213 15:09:13.418531 3210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:09:13.422366 kubelet[3210]: I0213 15:09:13.421548 3210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:09:13.422366 kubelet[3210]: I0213 15:09:13.421593 3210 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:09:13.422366 kubelet[3210]: I0213 15:09:13.421627 3210 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:09:13.422366 kubelet[3210]: E0213 15:09:13.421736 3210 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:09:13.443039 kubelet[3210]: I0213 15:09:13.442122 3210 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:09:13.443039 kubelet[3210]: I0213 15:09:13.442191 3210 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:09:13.469306 kubelet[3210]: E0213 15:09:13.469262 3210 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:09:13.527543 kubelet[3210]: E0213 15:09:13.527313 3210 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:09:13.604302 kubelet[3210]: I0213 15:09:13.604266 3210 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:09:13.604505 kubelet[3210]: I0213 15:09:13.604472 3210 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:09:13.604613 kubelet[3210]: I0213 15:09:13.604594 3210 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:13.606416 kubelet[3210]: I0213 15:09:13.605335 3210 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:09:13.606416 kubelet[3210]: I0213 15:09:13.605375 3210 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:09:13.606416 kubelet[3210]: I0213 15:09:13.605419 3210 policy_none.go:49] "None policy: Start" Feb 13 15:09:13.609588 kubelet[3210]: I0213 15:09:13.609544 3210 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:09:13.609588 kubelet[3210]: I0213 15:09:13.609596 3210 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:09:13.609979 kubelet[3210]: I0213 15:09:13.609911 3210 state_mem.go:75] "Updated machine memory state" Feb 13 15:09:13.633046 kubelet[3210]: I0213 15:09:13.632851 3210 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:09:13.635323 kubelet[3210]: I0213 15:09:13.635272 3210 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:09:13.635323 kubelet[3210]: I0213 15:09:13.635321 3210 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:09:13.635323 kubelet[3210]: I0213 15:09:13.636090 3210 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:09:13.749618 kubelet[3210]: E0213 15:09:13.749393 3210 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-142\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:13.774782 kubelet[3210]: I0213 15:09:13.774748 3210 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-30-142" Feb 13 15:09:13.783260 kubelet[3210]: I0213 15:09:13.782821 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4af438c1935ce8176a1ecb1866339ad2-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-142\" (UID: \"4af438c1935ce8176a1ecb1866339ad2\") " pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:13.783260 kubelet[3210]: I0213 15:09:13.782879 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:13.783978 kubelet[3210]: I0213 15:09:13.782930 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " 
pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:13.783978 kubelet[3210]: I0213 15:09:13.783842 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a743e145509ac3f0394193cae36038e-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-142\" (UID: \"7a743e145509ac3f0394193cae36038e\") " pod="kube-system/kube-scheduler-ip-172-31-30-142" Feb 13 15:09:13.785150 kubelet[3210]: I0213 15:09:13.784580 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4af438c1935ce8176a1ecb1866339ad2-ca-certs\") pod \"kube-apiserver-ip-172-31-30-142\" (UID: \"4af438c1935ce8176a1ecb1866339ad2\") " pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:13.785150 kubelet[3210]: I0213 15:09:13.784925 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4af438c1935ce8176a1ecb1866339ad2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-142\" (UID: \"4af438c1935ce8176a1ecb1866339ad2\") " pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:13.785704 kubelet[3210]: I0213 15:09:13.785513 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:13.786737 kubelet[3210]: I0213 15:09:13.786315 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:13.786737 kubelet[3210]: I0213 15:09:13.786371 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4641bb9e24ec041ca83277ac572695d0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-142\" (UID: \"4641bb9e24ec041ca83277ac572695d0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-142" Feb 13 15:09:13.801172 kubelet[3210]: I0213 15:09:13.799168 3210 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-30-142" Feb 13 15:09:13.801172 kubelet[3210]: I0213 15:09:13.800126 3210 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-30-142" Feb 13 15:09:14.298765 sudo[3224]: pam_unix(sudo:session): session closed for user root Feb 13 15:09:14.311038 kubelet[3210]: I0213 15:09:14.310840 3210 apiserver.go:52] "Watching apiserver" Feb 13 15:09:14.379043 kubelet[3210]: I0213 15:09:14.378923 3210 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:09:14.521601 kubelet[3210]: E0213 15:09:14.521311 3210 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-142\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-142" Feb 13 15:09:14.560808 kubelet[3210]: I0213 15:09:14.560631 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ip-172-31-30-142" podStartSLOduration=1.560607333 podStartE2EDuration="1.560607333s" podCreationTimestamp="2025-02-13 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:14.545756529 +0000 UTC m=+1.390872512" watchObservedRunningTime="2025-02-13 15:09:14.560607333 +0000 UTC m=+1.405723304" Feb 13 15:09:14.580475 kubelet[3210]: I0213 15:09:14.580380 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-142" podStartSLOduration=1.5803434090000001 podStartE2EDuration="1.580343409s" podCreationTimestamp="2025-02-13 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:14.562609569 +0000 UTC m=+1.407725564" watchObservedRunningTime="2025-02-13 15:09:14.580343409 +0000 UTC m=+1.425459380" Feb 13 15:09:14.602606 kubelet[3210]: I0213 15:09:14.602283 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-142" podStartSLOduration=3.602259405 podStartE2EDuration="3.602259405s" podCreationTimestamp="2025-02-13 15:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:14.580991157 +0000 UTC m=+1.426107128" watchObservedRunningTime="2025-02-13 15:09:14.602259405 +0000 UTC m=+1.447375376" Feb 13 15:09:14.973099 update_engine[1932]: I20250213 15:09:14.972999 1932 update_attempter.cc:509] Updating boot flags... Feb 13 15:09:15.109068 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3268) Feb 13 15:09:15.764019 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3271) Feb 13 15:09:17.536231 kubelet[3210]: I0213 15:09:17.536156 3210 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:09:17.538086 containerd[1955]: time="2025-02-13T15:09:17.538022604Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:09:17.540279 kubelet[3210]: I0213 15:09:17.539083 3210 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:09:17.756265 sudo[2276]: pam_unix(sudo:session): session closed for user root Feb 13 15:09:17.779004 sshd[2275]: Connection closed by 139.178.68.195 port 50824 Feb 13 15:09:17.779891 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Feb 13 15:09:17.787830 systemd[1]: sshd@6-172.31.30.142:22-139.178.68.195:50824.service: Deactivated successfully. Feb 13 15:09:17.794658 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:09:17.796134 systemd[1]: session-7.scope: Consumed 9.275s CPU time, 260.3M memory peak. Feb 13 15:09:17.800862 systemd-logind[1931]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:09:17.804698 systemd-logind[1931]: Removed session 7. Feb 13 15:09:18.350980 systemd[1]: Created slice kubepods-besteffort-podb2a8286c_9325_422a_a719_d13aad519086.slice - libcontainer container kubepods-besteffort-podb2a8286c_9325_422a_a719_d13aad519086.slice. 
Feb 13 15:09:18.411395 systemd[1]: Created slice kubepods-burstable-pod368bec3b_1909_4277_a6d5_89daa02ed593.slice - libcontainer container kubepods-burstable-pod368bec3b_1909_4277_a6d5_89daa02ed593.slice. Feb 13 15:09:18.432856 kubelet[3210]: I0213 15:09:18.432081 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-xtables-lock\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.432856 kubelet[3210]: I0213 15:09:18.432153 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sws4h\" (UniqueName: \"kubernetes.io/projected/b2a8286c-9325-422a-a719-d13aad519086-kube-api-access-sws4h\") pod \"kube-proxy-ld4dr\" (UID: \"b2a8286c-9325-422a-a719-d13aad519086\") " pod="kube-system/kube-proxy-ld4dr" Feb 13 15:09:18.432856 kubelet[3210]: I0213 15:09:18.432198 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-hostproc\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.432856 kubelet[3210]: I0213 15:09:18.432233 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/368bec3b-1909-4277-a6d5-89daa02ed593-clustermesh-secrets\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.432856 kubelet[3210]: I0213 15:09:18.432269 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-config-path\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433314 kubelet[3210]: I0213 15:09:18.432307 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-net\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433314 kubelet[3210]: I0213 15:09:18.432341 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-hubble-tls\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433314 kubelet[3210]: I0213 15:09:18.432388 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2a8286c-9325-422a-a719-d13aad519086-xtables-lock\") pod \"kube-proxy-ld4dr\" (UID: \"b2a8286c-9325-422a-a719-d13aad519086\") " pod="kube-system/kube-proxy-ld4dr" Feb 13 15:09:18.433314 kubelet[3210]: I0213 15:09:18.432427 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-lib-modules\") pod \"cilium-hgrrg\" (UID: 
\"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433314 kubelet[3210]: I0213 15:09:18.432463 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58f22\" (UniqueName: \"kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-kube-api-access-58f22\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433314 kubelet[3210]: I0213 15:09:18.432498 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-etc-cni-netd\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433599 kubelet[3210]: I0213 15:09:18.432557 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-run\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433599 kubelet[3210]: I0213 15:09:18.432595 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2a8286c-9325-422a-a719-d13aad519086-kube-proxy\") pod \"kube-proxy-ld4dr\" (UID: \"b2a8286c-9325-422a-a719-d13aad519086\") " pod="kube-system/kube-proxy-ld4dr" Feb 13 15:09:18.433599 kubelet[3210]: I0213 15:09:18.432632 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-cgroup\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433599 kubelet[3210]: I0213 15:09:18.432667 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cni-path\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433599 kubelet[3210]: I0213 15:09:18.432701 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-kernel\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.433599 kubelet[3210]: I0213 15:09:18.432733 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-bpf-maps\") pod \"cilium-hgrrg\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " pod="kube-system/cilium-hgrrg" Feb 13 15:09:18.435034 kubelet[3210]: I0213 15:09:18.432772 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2a8286c-9325-422a-a719-d13aad519086-lib-modules\") pod \"kube-proxy-ld4dr\" (UID: \"b2a8286c-9325-422a-a719-d13aad519086\") " pod="kube-system/kube-proxy-ld4dr" Feb 13 15:09:18.669678 containerd[1955]: time="2025-02-13T15:09:18.669415586Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-ld4dr,Uid:b2a8286c-9325-422a-a719-d13aad519086,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:18.727038 containerd[1955]: time="2025-02-13T15:09:18.725598662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hgrrg,Uid:368bec3b-1909-4277-a6d5-89daa02ed593,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:18.761187 containerd[1955]: time="2025-02-13T15:09:18.759622514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:18.761187 containerd[1955]: time="2025-02-13T15:09:18.759743906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:18.761187 containerd[1955]: time="2025-02-13T15:09:18.759780950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:18.773059 containerd[1955]: time="2025-02-13T15:09:18.770775878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:18.786630 systemd[1]: Created slice kubepods-besteffort-podb40c1a18_6a44_4b15_8ecc_b6cba91f498e.slice - libcontainer container kubepods-besteffort-podb40c1a18_6a44_4b15_8ecc_b6cba91f498e.slice. Feb 13 15:09:18.839854 kubelet[3210]: I0213 15:09:18.835425 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptxp6\" (UniqueName: \"kubernetes.io/projected/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-kube-api-access-ptxp6\") pod \"cilium-operator-5d85765b45-6mbbr\" (UID: \"b40c1a18-6a44-4b15-8ecc-b6cba91f498e\") " pod="kube-system/cilium-operator-5d85765b45-6mbbr" Feb 13 15:09:18.839854 kubelet[3210]: I0213 15:09:18.835507 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-cilium-config-path\") pod \"cilium-operator-5d85765b45-6mbbr\" (UID: \"b40c1a18-6a44-4b15-8ecc-b6cba91f498e\") " pod="kube-system/cilium-operator-5d85765b45-6mbbr" Feb 13 15:09:18.867019 containerd[1955]: time="2025-02-13T15:09:18.864856227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:18.867019 containerd[1955]: time="2025-02-13T15:09:18.865037343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:18.867019 containerd[1955]: time="2025-02-13T15:09:18.865075371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:18.867019 containerd[1955]: time="2025-02-13T15:09:18.865236831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:18.887506 systemd[1]: Started cri-containerd-b7c631b2cdfbefd4f2e4922b222ea973db306ad32de0978cdfe295edf72e0e12.scope - libcontainer container b7c631b2cdfbefd4f2e4922b222ea973db306ad32de0978cdfe295edf72e0e12. Feb 13 15:09:18.963731 systemd[1]: Started cri-containerd-f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa.scope - libcontainer container f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa. 
Feb 13 15:09:19.061160 containerd[1955]: time="2025-02-13T15:09:19.061088700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ld4dr,Uid:b2a8286c-9325-422a-a719-d13aad519086,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7c631b2cdfbefd4f2e4922b222ea973db306ad32de0978cdfe295edf72e0e12\"" Feb 13 15:09:19.068839 containerd[1955]: time="2025-02-13T15:09:19.068592576Z" level=info msg="CreateContainer within sandbox \"b7c631b2cdfbefd4f2e4922b222ea973db306ad32de0978cdfe295edf72e0e12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:09:19.079465 containerd[1955]: time="2025-02-13T15:09:19.079351272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hgrrg,Uid:368bec3b-1909-4277-a6d5-89daa02ed593,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\"" Feb 13 15:09:19.088805 containerd[1955]: time="2025-02-13T15:09:19.087432288Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:09:19.114389 containerd[1955]: time="2025-02-13T15:09:19.114315672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6mbbr,Uid:b40c1a18-6a44-4b15-8ecc-b6cba91f498e,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:19.133637 containerd[1955]: time="2025-02-13T15:09:19.133573104Z" level=info msg="CreateContainer within sandbox \"b7c631b2cdfbefd4f2e4922b222ea973db306ad32de0978cdfe295edf72e0e12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f8d625dd65f3b307441ed352c0b4a3f64c4de657e4de43629d57825ddedddf5\"" Feb 13 15:09:19.135100 containerd[1955]: time="2025-02-13T15:09:19.134881608Z" level=info msg="StartContainer for \"5f8d625dd65f3b307441ed352c0b4a3f64c4de657e4de43629d57825ddedddf5\"" Feb 13 15:09:19.188062 containerd[1955]: time="2025-02-13T15:09:19.187674852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:19.188062 containerd[1955]: time="2025-02-13T15:09:19.187799196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:19.188062 containerd[1955]: time="2025-02-13T15:09:19.187836792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:19.188523 containerd[1955]: time="2025-02-13T15:09:19.188053512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:19.220081 systemd[1]: Started cri-containerd-5f8d625dd65f3b307441ed352c0b4a3f64c4de657e4de43629d57825ddedddf5.scope - libcontainer container 5f8d625dd65f3b307441ed352c0b4a3f64c4de657e4de43629d57825ddedddf5. Feb 13 15:09:19.238682 systemd[1]: Started cri-containerd-1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7.scope - libcontainer container 1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7. 
Feb 13 15:09:19.324835 containerd[1955]: time="2025-02-13T15:09:19.324403093Z" level=info msg="StartContainer for \"5f8d625dd65f3b307441ed352c0b4a3f64c4de657e4de43629d57825ddedddf5\" returns successfully" Feb 13 15:09:19.333879 containerd[1955]: time="2025-02-13T15:09:19.333829621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6mbbr,Uid:b40c1a18-6a44-4b15-8ecc-b6cba91f498e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\"" Feb 13 15:09:19.558639 kubelet[3210]: I0213 15:09:19.558375 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ld4dr" podStartSLOduration=1.558352682 podStartE2EDuration="1.558352682s" podCreationTimestamp="2025-02-13 15:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:19.558252182 +0000 UTC m=+6.403368141" watchObservedRunningTime="2025-02-13 15:09:19.558352682 +0000 UTC m=+6.403468665" Feb 13 15:09:30.994696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207030154.mount: Deactivated successfully. Feb 13 15:09:33.688555 containerd[1955]: time="2025-02-13T15:09:33.687988252Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:33.690314 containerd[1955]: time="2025-02-13T15:09:33.690246628Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:09:33.692216 containerd[1955]: time="2025-02-13T15:09:33.692120716Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:33.696156 containerd[1955]: time="2025-02-13T15:09:33.695872288Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.607314172s" Feb 13 15:09:33.696156 containerd[1955]: time="2025-02-13T15:09:33.695932240Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:09:33.699301 containerd[1955]: time="2025-02-13T15:09:33.699008164Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:09:33.703327 containerd[1955]: time="2025-02-13T15:09:33.703208728Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:09:33.735243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799567664.mount: Deactivated successfully. 
Feb 13 15:09:33.743211 containerd[1955]: time="2025-02-13T15:09:33.743086816Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\"" Feb 13 15:09:33.744798 containerd[1955]: time="2025-02-13T15:09:33.744733216Z" level=info msg="StartContainer for \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\"" Feb 13 15:09:33.821291 systemd[1]: Started cri-containerd-87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0.scope - libcontainer container 87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0. Feb 13 15:09:33.878538 containerd[1955]: time="2025-02-13T15:09:33.878252381Z" level=info msg="StartContainer for \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\" returns successfully" Feb 13 15:09:33.897230 systemd[1]: cri-containerd-87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0.scope: Deactivated successfully. Feb 13 15:09:34.725174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0-rootfs.mount: Deactivated successfully. Feb 13 15:09:35.554167 containerd[1955]: time="2025-02-13T15:09:35.554068961Z" level=info msg="shim disconnected" id=87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0 namespace=k8s.io Feb 13 15:09:35.554167 containerd[1955]: time="2025-02-13T15:09:35.554140661Z" level=warning msg="cleaning up after shim disconnected" id=87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0 namespace=k8s.io Feb 13 15:09:35.554167 containerd[1955]: time="2025-02-13T15:09:35.554160365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:35.600077 containerd[1955]: time="2025-02-13T15:09:35.599994126Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:09:35.640603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887875534.mount: Deactivated successfully. Feb 13 15:09:35.643990 containerd[1955]: time="2025-02-13T15:09:35.643887690Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\"" Feb 13 15:09:35.646160 containerd[1955]: time="2025-02-13T15:09:35.645899178Z" level=info msg="StartContainer for \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\"" Feb 13 15:09:35.698266 systemd[1]: Started cri-containerd-8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04.scope - libcontainer container 8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04. Feb 13 15:09:35.777967 containerd[1955]: time="2025-02-13T15:09:35.777897283Z" level=info msg="StartContainer for \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\" returns successfully" Feb 13 15:09:35.800147 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:09:35.801695 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:09:35.802608 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 15:09:35.812820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:09:35.818898 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:09:35.821469 systemd[1]: cri-containerd-8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04.scope: Deactivated successfully. Feb 13 15:09:35.865196 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:09:35.881474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04-rootfs.mount: Deactivated successfully. Feb 13 15:09:35.890441 containerd[1955]: time="2025-02-13T15:09:35.890347627Z" level=info msg="shim disconnected" id=8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04 namespace=k8s.io Feb 13 15:09:35.890441 containerd[1955]: time="2025-02-13T15:09:35.890437087Z" level=warning msg="cleaning up after shim disconnected" id=8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04 namespace=k8s.io Feb 13 15:09:35.890802 containerd[1955]: time="2025-02-13T15:09:35.890459671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:36.600998 containerd[1955]: time="2025-02-13T15:09:36.600614695Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:09:36.670746 containerd[1955]: time="2025-02-13T15:09:36.670298143Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\"" Feb 13 15:09:36.676231 containerd[1955]: time="2025-02-13T15:09:36.674141587Z" level=info msg="StartContainer for \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\"" Feb 13 15:09:36.736906 systemd[1]: Started cri-containerd-9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2.scope - libcontainer container 9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2. Feb 13 15:09:36.802750 systemd[1]: cri-containerd-9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2.scope: Deactivated successfully. Feb 13 15:09:36.824674 containerd[1955]: time="2025-02-13T15:09:36.824503664Z" level=info msg="StartContainer for \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\" returns successfully" Feb 13 15:09:36.862320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2-rootfs.mount: Deactivated successfully. 
Feb 13 15:09:36.869318 containerd[1955]: time="2025-02-13T15:09:36.869219336Z" level=info msg="shim disconnected" id=9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2 namespace=k8s.io Feb 13 15:09:36.869318 containerd[1955]: time="2025-02-13T15:09:36.869292452Z" level=warning msg="cleaning up after shim disconnected" id=9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2 namespace=k8s.io Feb 13 15:09:36.869318 containerd[1955]: time="2025-02-13T15:09:36.869312144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:37.623758 containerd[1955]: time="2025-02-13T15:09:37.623594564Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:09:37.679988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372057019.mount: Deactivated successfully. Feb 13 15:09:37.689043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037325887.mount: Deactivated successfully. Feb 13 15:09:37.696090 containerd[1955]: time="2025-02-13T15:09:37.695916824Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\"" Feb 13 15:09:37.698877 containerd[1955]: time="2025-02-13T15:09:37.698294060Z" level=info msg="StartContainer for \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\"" Feb 13 15:09:37.775275 systemd[1]: Started cri-containerd-f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453.scope - libcontainer container f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453. Feb 13 15:09:37.840092 systemd[1]: cri-containerd-f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453.scope: Deactivated successfully. Feb 13 15:09:37.846927 containerd[1955]: time="2025-02-13T15:09:37.846595593Z" level=info msg="StartContainer for \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\" returns successfully" Feb 13 15:09:37.886900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453-rootfs.mount: Deactivated successfully. 
Feb 13 15:09:38.002384 containerd[1955]: time="2025-02-13T15:09:38.002149806Z" level=info msg="shim disconnected" id=f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453 namespace=k8s.io Feb 13 15:09:38.002384 containerd[1955]: time="2025-02-13T15:09:38.002222130Z" level=warning msg="cleaning up after shim disconnected" id=f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453 namespace=k8s.io Feb 13 15:09:38.002384 containerd[1955]: time="2025-02-13T15:09:38.002242026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:38.627305 containerd[1955]: time="2025-02-13T15:09:38.627245277Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:09:38.685301 containerd[1955]: time="2025-02-13T15:09:38.685127805Z" level=info msg="CreateContainer within sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\"" Feb 13 15:09:38.688340 containerd[1955]: time="2025-02-13T15:09:38.687725145Z" level=info msg="StartContainer for \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\"" Feb 13 15:09:38.768385 systemd[1]: Started cri-containerd-0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c.scope - libcontainer container 0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c. Feb 13 15:09:38.842003 containerd[1955]: time="2025-02-13T15:09:38.841909678Z" level=info msg="StartContainer for \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\" returns successfully" Feb 13 15:09:38.868140 containerd[1955]: time="2025-02-13T15:09:38.868044430Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:38.871083 containerd[1955]: time="2025-02-13T15:09:38.870142882Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:09:38.879365 containerd[1955]: time="2025-02-13T15:09:38.878989342Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:38.890369 containerd[1955]: time="2025-02-13T15:09:38.889123126Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.19004391s" Feb 13 15:09:38.890369 containerd[1955]: time="2025-02-13T15:09:38.889191442Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:09:38.902026 containerd[1955]: time="2025-02-13T15:09:38.900574306Z" level=info msg="CreateContainer within sandbox 
\"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:09:38.960422 containerd[1955]: time="2025-02-13T15:09:38.960277822Z" level=info msg="CreateContainer within sandbox \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\"" Feb 13 15:09:38.964938 containerd[1955]: time="2025-02-13T15:09:38.964862470Z" level=info msg="StartContainer for \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\"" Feb 13 15:09:39.039287 systemd[1]: Started cri-containerd-f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd.scope - libcontainer container f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd. Feb 13 15:09:39.144678 containerd[1955]: time="2025-02-13T15:09:39.144277423Z" level=info msg="StartContainer for \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\" returns successfully" Feb 13 15:09:39.159483 kubelet[3210]: I0213 15:09:39.159330 3210 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:09:39.229327 systemd[1]: Created slice kubepods-burstable-pod86b77661_b16b_4fd9_8c91_37bfd8e52272.slice - libcontainer container kubepods-burstable-pod86b77661_b16b_4fd9_8c91_37bfd8e52272.slice. Feb 13 15:09:39.251040 systemd[1]: Created slice kubepods-burstable-pod5c0eec8b_0682_49fc_9a97_9f89d171f464.slice - libcontainer container kubepods-burstable-pod5c0eec8b_0682_49fc_9a97_9f89d171f464.slice. Feb 13 15:09:39.286310 kubelet[3210]: I0213 15:09:39.286220 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86b77661-b16b-4fd9-8c91-37bfd8e52272-config-volume\") pod \"coredns-6f6b679f8f-p52gq\" (UID: \"86b77661-b16b-4fd9-8c91-37bfd8e52272\") " pod="kube-system/coredns-6f6b679f8f-p52gq" Feb 13 15:09:39.286714 kubelet[3210]: I0213 15:09:39.286306 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c0eec8b-0682-49fc-9a97-9f89d171f464-config-volume\") pod \"coredns-6f6b679f8f-n7q5n\" (UID: \"5c0eec8b-0682-49fc-9a97-9f89d171f464\") " pod="kube-system/coredns-6f6b679f8f-n7q5n" Feb 13 15:09:39.286714 kubelet[3210]: I0213 15:09:39.286403 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpwxz\" (UniqueName: \"kubernetes.io/projected/5c0eec8b-0682-49fc-9a97-9f89d171f464-kube-api-access-tpwxz\") pod \"coredns-6f6b679f8f-n7q5n\" (UID: \"5c0eec8b-0682-49fc-9a97-9f89d171f464\") " pod="kube-system/coredns-6f6b679f8f-n7q5n" Feb 13 15:09:39.286714 kubelet[3210]: I0213 15:09:39.286455 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57hlk\" (UniqueName: \"kubernetes.io/projected/86b77661-b16b-4fd9-8c91-37bfd8e52272-kube-api-access-57hlk\") pod \"coredns-6f6b679f8f-p52gq\" (UID: \"86b77661-b16b-4fd9-8c91-37bfd8e52272\") " pod="kube-system/coredns-6f6b679f8f-p52gq" Feb 13 15:09:39.539746 containerd[1955]: time="2025-02-13T15:09:39.539680653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p52gq,Uid:86b77661-b16b-4fd9-8c91-37bfd8e52272,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:39.559454 containerd[1955]: 
time="2025-02-13T15:09:39.559356369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-n7q5n,Uid:5c0eec8b-0682-49fc-9a97-9f89d171f464,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:39.780244 kubelet[3210]: I0213 15:09:39.778620 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6mbbr" podStartSLOduration=2.223710217 podStartE2EDuration="21.778596502s" podCreationTimestamp="2025-02-13 15:09:18 +0000 UTC" firstStartedPulling="2025-02-13 15:09:19.339564313 +0000 UTC m=+6.184680272" lastFinishedPulling="2025-02-13 15:09:38.89445061 +0000 UTC m=+25.739566557" observedRunningTime="2025-02-13 15:09:39.688885474 +0000 UTC m=+26.534001469" watchObservedRunningTime="2025-02-13 15:09:39.778596502 +0000 UTC m=+26.623712473" Feb 13 15:09:39.780244 kubelet[3210]: I0213 15:09:39.779213 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hgrrg" podStartSLOduration=7.166205898 podStartE2EDuration="21.77919511s" podCreationTimestamp="2025-02-13 15:09:18 +0000 UTC" firstStartedPulling="2025-02-13 15:09:19.085199784 +0000 UTC m=+5.930315743" lastFinishedPulling="2025-02-13 15:09:33.698188912 +0000 UTC m=+20.543304955" observedRunningTime="2025-02-13 15:09:39.778574866 +0000 UTC m=+26.623690861" watchObservedRunningTime="2025-02-13 15:09:39.77919511 +0000 UTC m=+26.624311093" Feb 13 15:09:44.488613 systemd-networkd[1872]: cilium_host: Link UP Feb 13 15:09:44.488935 systemd-networkd[1872]: cilium_net: Link UP Feb 13 15:09:44.490987 systemd-networkd[1872]: cilium_net: Gained carrier Feb 13 15:09:44.491738 systemd-networkd[1872]: cilium_host: Gained carrier Feb 13 15:09:44.500790 (udev-worker)[4233]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:44.503415 (udev-worker)[4234]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:44.660621 (udev-worker)[4232]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:44.674787 systemd-networkd[1872]: cilium_vxlan: Link UP Feb 13 15:09:44.674801 systemd-networkd[1872]: cilium_vxlan: Gained carrier Feb 13 15:09:45.164400 systemd-networkd[1872]: cilium_net: Gained IPv6LL Feb 13 15:09:45.164793 systemd-networkd[1872]: cilium_host: Gained IPv6LL Feb 13 15:09:45.181239 kernel: NET: Registered PF_ALG protocol family Feb 13 15:09:46.560558 systemd-networkd[1872]: lxc_health: Link UP Feb 13 15:09:46.581487 systemd-networkd[1872]: cilium_vxlan: Gained IPv6LL Feb 13 15:09:46.601873 systemd-networkd[1872]: lxc_health: Gained carrier Feb 13 15:09:47.165195 kernel: eth0: renamed from tmp3fad9 Feb 13 15:09:47.162686 systemd-networkd[1872]: lxca7fa7dc1503f: Link UP Feb 13 15:09:47.172042 systemd-networkd[1872]: lxca7fa7dc1503f: Gained carrier Feb 13 15:09:47.231645 systemd-networkd[1872]: lxc655c76dd982f: Link UP Feb 13 15:09:47.233856 (udev-worker)[4242]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:09:47.238990 kernel: eth0: renamed from tmpd6507 Feb 13 15:09:47.244995 systemd-networkd[1872]: lxc655c76dd982f: Gained carrier Feb 13 15:09:47.980154 systemd-networkd[1872]: lxc_health: Gained IPv6LL Feb 13 15:09:48.748760 systemd-networkd[1872]: lxc655c76dd982f: Gained IPv6LL Feb 13 15:09:48.940526 systemd-networkd[1872]: lxca7fa7dc1503f: Gained IPv6LL Feb 13 15:09:51.180786 ntpd[1926]: Listen normally on 8 cilium_host 192.168.0.137:123 Feb 13 15:09:51.180926 ntpd[1926]: Listen normally on 9 cilium_net [fe80::78a3:c4ff:fec5:173a%4]:123 Feb 13 15:09:51.181849 ntpd[1926]: Listen normally on 10 cilium_host [fe80::10e8:49ff:fe39:f35b%5]:123 Feb 13 15:09:51.181927 ntpd[1926]: Listen normally on 11 cilium_vxlan [fe80::80f9:aeff:feb1:ea96%6]:123 Feb 13 15:09:51.182025 ntpd[1926]: Listen normally on 12 lxc_health [fe80::c043:adff:fec9:900b%8]:123 Feb 13 15:09:51.182096 ntpd[1926]: Listen normally on 13 lxca7fa7dc1503f [fe80::1861:cfff:fea7:8ac1%10]:123 Feb 13 15:09:51.182164 ntpd[1926]: Listen normally on 14 lxc655c76dd982f [fe80::48e4:76ff:febe:c874%12]:123 Feb 13 15:09:55.746038 containerd[1955]: time="2025-02-13T15:09:55.745852238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:55.746790 containerd[1955]: time="2025-02-13T15:09:55.746356970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:55.746790 containerd[1955]: time="2025-02-13T15:09:55.746453594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:55.749996 containerd[1955]: time="2025-02-13T15:09:55.747639722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:55.794780 systemd[1]: Started cri-containerd-3fad99fe36152cae797661613901032d4769fbe11e059552a89e83cc3fdd7815.scope - libcontainer container 3fad99fe36152cae797661613901032d4769fbe11e059552a89e83cc3fdd7815. Feb 13 15:09:55.830984 containerd[1955]: time="2025-02-13T15:09:55.828748634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:55.830984 containerd[1955]: time="2025-02-13T15:09:55.828896174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:55.830984 containerd[1955]: time="2025-02-13T15:09:55.828924722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:55.830984 containerd[1955]: time="2025-02-13T15:09:55.829201658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:55.910310 systemd[1]: Started cri-containerd-d6507f25aa0e38e6e3c2b04d595e44c37f69844c7c0ea728da8843f16cda9e14.scope - libcontainer container d6507f25aa0e38e6e3c2b04d595e44c37f69844c7c0ea728da8843f16cda9e14. Feb 13 15:09:55.941092 containerd[1955]: time="2025-02-13T15:09:55.940297191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p52gq,Uid:86b77661-b16b-4fd9-8c91-37bfd8e52272,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fad99fe36152cae797661613901032d4769fbe11e059552a89e83cc3fdd7815\"" Feb 13 15:09:55.953143 containerd[1955]: time="2025-02-13T15:09:55.953075439Z" level=info msg="CreateContainer within sandbox \"3fad99fe36152cae797661613901032d4769fbe11e059552a89e83cc3fdd7815\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:09:56.020605 containerd[1955]: time="2025-02-13T15:09:56.019796303Z" level=info msg="CreateContainer within sandbox \"3fad99fe36152cae797661613901032d4769fbe11e059552a89e83cc3fdd7815\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"140828b9d3b049dcbee53c794448fe1d19a2edeee082d6ab3e34372241d46ac8\"" Feb 13 15:09:56.024806 containerd[1955]: time="2025-02-13T15:09:56.024332795Z" level=info msg="StartContainer for \"140828b9d3b049dcbee53c794448fe1d19a2edeee082d6ab3e34372241d46ac8\"" Feb 13 15:09:56.120497 systemd[1]: Started cri-containerd-140828b9d3b049dcbee53c794448fe1d19a2edeee082d6ab3e34372241d46ac8.scope - libcontainer container 140828b9d3b049dcbee53c794448fe1d19a2edeee082d6ab3e34372241d46ac8. Feb 13 15:09:56.139272 containerd[1955]: time="2025-02-13T15:09:56.139068648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-n7q5n,Uid:5c0eec8b-0682-49fc-9a97-9f89d171f464,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6507f25aa0e38e6e3c2b04d595e44c37f69844c7c0ea728da8843f16cda9e14\"" Feb 13 15:09:56.153781 containerd[1955]: time="2025-02-13T15:09:56.153500148Z" level=info msg="CreateContainer within sandbox \"d6507f25aa0e38e6e3c2b04d595e44c37f69844c7c0ea728da8843f16cda9e14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:09:56.184434 containerd[1955]: time="2025-02-13T15:09:56.184363032Z" level=info msg="CreateContainer within sandbox \"d6507f25aa0e38e6e3c2b04d595e44c37f69844c7c0ea728da8843f16cda9e14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c2c71cb683ada4eba1b12d8009d91b99b42279a2dfad6bcdcadf8dbea74ca39\"" Feb 13 15:09:56.187860 containerd[1955]: time="2025-02-13T15:09:56.185877720Z" level=info msg="StartContainer for \"4c2c71cb683ada4eba1b12d8009d91b99b42279a2dfad6bcdcadf8dbea74ca39\"" Feb 13 15:09:56.260886 containerd[1955]: time="2025-02-13T15:09:56.260790816Z" level=info msg="StartContainer for \"140828b9d3b049dcbee53c794448fe1d19a2edeee082d6ab3e34372241d46ac8\" returns successfully" Feb 13 15:09:56.278077 systemd[1]: Started cri-containerd-4c2c71cb683ada4eba1b12d8009d91b99b42279a2dfad6bcdcadf8dbea74ca39.scope - libcontainer container 4c2c71cb683ada4eba1b12d8009d91b99b42279a2dfad6bcdcadf8dbea74ca39. 
Feb 13 15:09:56.379159 containerd[1955]: time="2025-02-13T15:09:56.379071757Z" level=info msg="StartContainer for \"4c2c71cb683ada4eba1b12d8009d91b99b42279a2dfad6bcdcadf8dbea74ca39\" returns successfully" Feb 13 15:09:56.754196 kubelet[3210]: I0213 15:09:56.754056 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-p52gq" podStartSLOduration=38.754027695 podStartE2EDuration="38.754027695s" podCreationTimestamp="2025-02-13 15:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:56.750145947 +0000 UTC m=+43.595261942" watchObservedRunningTime="2025-02-13 15:09:56.754027695 +0000 UTC m=+43.599143738" Feb 13 15:09:56.779253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580897068.mount: Deactivated successfully. Feb 13 15:10:00.194667 systemd[1]: Started sshd@7-172.31.30.142:22-139.178.68.195:36620.service - OpenSSH per-connection server daemon (139.178.68.195:36620). Feb 13 15:10:00.389152 sshd[4774]: Accepted publickey for core from 139.178.68.195 port 36620 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:00.392159 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:00.402136 systemd-logind[1931]: New session 8 of user core. Feb 13 15:10:00.409226 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:10:00.680784 sshd[4776]: Connection closed by 139.178.68.195 port 36620 Feb 13 15:10:00.681747 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:00.688367 systemd[1]: sshd@7-172.31.30.142:22-139.178.68.195:36620.service: Deactivated successfully. Feb 13 15:10:00.692666 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:10:00.694576 systemd-logind[1931]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:10:00.696619 systemd-logind[1931]: Removed session 8. Feb 13 15:10:05.726658 systemd[1]: Started sshd@8-172.31.30.142:22-139.178.68.195:36622.service - OpenSSH per-connection server daemon (139.178.68.195:36622). Feb 13 15:10:05.921504 sshd[4791]: Accepted publickey for core from 139.178.68.195 port 36622 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:05.924245 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:05.934498 systemd-logind[1931]: New session 9 of user core. Feb 13 15:10:05.941137 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:10:06.200917 sshd[4793]: Connection closed by 139.178.68.195 port 36622 Feb 13 15:10:06.202024 sshd-session[4791]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:06.209243 systemd-logind[1931]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:10:06.210729 systemd[1]: sshd@8-172.31.30.142:22-139.178.68.195:36622.service: Deactivated successfully. Feb 13 15:10:06.217701 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:10:06.224834 systemd-logind[1931]: Removed session 9. Feb 13 15:10:11.247509 systemd[1]: Started sshd@9-172.31.30.142:22-139.178.68.195:50668.service - OpenSSH per-connection server daemon (139.178.68.195:50668). 
Feb 13 15:10:11.430288 sshd[4805]: Accepted publickey for core from 139.178.68.195 port 50668 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:11.432563 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:11.441755 systemd-logind[1931]: New session 10 of user core. Feb 13 15:10:11.447304 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:10:11.700015 sshd[4807]: Connection closed by 139.178.68.195 port 50668 Feb 13 15:10:11.699244 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:11.706479 systemd[1]: sshd@9-172.31.30.142:22-139.178.68.195:50668.service: Deactivated successfully. Feb 13 15:10:11.711792 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:10:11.716841 systemd-logind[1931]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:10:11.718761 systemd-logind[1931]: Removed session 10. Feb 13 15:10:16.756695 systemd[1]: Started sshd@10-172.31.30.142:22-139.178.68.195:36564.service - OpenSSH per-connection server daemon (139.178.68.195:36564). Feb 13 15:10:16.942213 sshd[4822]: Accepted publickey for core from 139.178.68.195 port 36564 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:16.944889 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:16.956237 systemd-logind[1931]: New session 11 of user core. Feb 13 15:10:16.965352 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:10:17.260142 sshd[4824]: Connection closed by 139.178.68.195 port 36564 Feb 13 15:10:17.261181 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:17.269284 systemd[1]: sshd@10-172.31.30.142:22-139.178.68.195:36564.service: Deactivated successfully. Feb 13 15:10:17.276347 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:10:17.278033 systemd-logind[1931]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:10:17.282679 systemd-logind[1931]: Removed session 11. Feb 13 15:10:22.309467 systemd[1]: Started sshd@11-172.31.30.142:22-139.178.68.195:36578.service - OpenSSH per-connection server daemon (139.178.68.195:36578). Feb 13 15:10:22.496178 sshd[4839]: Accepted publickey for core from 139.178.68.195 port 36578 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:22.499579 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:22.511718 systemd-logind[1931]: New session 12 of user core. Feb 13 15:10:22.518324 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:10:22.777104 sshd[4841]: Connection closed by 139.178.68.195 port 36578 Feb 13 15:10:22.778166 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:22.785304 systemd[1]: sshd@11-172.31.30.142:22-139.178.68.195:36578.service: Deactivated successfully. Feb 13 15:10:22.791963 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:10:22.794328 systemd-logind[1931]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:10:22.797351 systemd-logind[1931]: Removed session 12. Feb 13 15:10:22.822609 systemd[1]: Started sshd@12-172.31.30.142:22-139.178.68.195:36594.service - OpenSSH per-connection server daemon (139.178.68.195:36594). 
Feb 13 15:10:23.009097 sshd[4853]: Accepted publickey for core from 139.178.68.195 port 36594 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:23.011990 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:23.023445 systemd-logind[1931]: New session 13 of user core. Feb 13 15:10:23.032251 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:10:23.375111 sshd[4855]: Connection closed by 139.178.68.195 port 36594 Feb 13 15:10:23.378273 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:23.391331 systemd[1]: sshd@12-172.31.30.142:22-139.178.68.195:36594.service: Deactivated successfully. Feb 13 15:10:23.403532 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:10:23.409213 systemd-logind[1931]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:10:23.435936 systemd[1]: Started sshd@13-172.31.30.142:22-139.178.68.195:36602.service - OpenSSH per-connection server daemon (139.178.68.195:36602). Feb 13 15:10:23.440046 systemd-logind[1931]: Removed session 13. Feb 13 15:10:23.650212 sshd[4864]: Accepted publickey for core from 139.178.68.195 port 36602 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:23.652905 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:23.667659 systemd-logind[1931]: New session 14 of user core. Feb 13 15:10:23.682309 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:10:23.979454 sshd[4867]: Connection closed by 139.178.68.195 port 36602 Feb 13 15:10:23.980284 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:23.988039 systemd[1]: sshd@13-172.31.30.142:22-139.178.68.195:36602.service: Deactivated successfully. Feb 13 15:10:23.993434 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:10:23.996367 systemd-logind[1931]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:10:24.000064 systemd-logind[1931]: Removed session 14. Feb 13 15:10:29.030472 systemd[1]: Started sshd@14-172.31.30.142:22-139.178.68.195:38712.service - OpenSSH per-connection server daemon (139.178.68.195:38712). Feb 13 15:10:29.218630 sshd[4882]: Accepted publickey for core from 139.178.68.195 port 38712 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:29.221151 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:29.229533 systemd-logind[1931]: New session 15 of user core. Feb 13 15:10:29.249398 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:10:29.530798 sshd[4884]: Connection closed by 139.178.68.195 port 38712 Feb 13 15:10:29.532364 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:29.543793 systemd[1]: sshd@14-172.31.30.142:22-139.178.68.195:38712.service: Deactivated successfully. Feb 13 15:10:29.548659 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:10:29.551896 systemd-logind[1931]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:10:29.555154 systemd-logind[1931]: Removed session 15. Feb 13 15:10:34.577523 systemd[1]: Started sshd@15-172.31.30.142:22-139.178.68.195:38724.service - OpenSSH per-connection server daemon (139.178.68.195:38724). 
Feb 13 15:10:34.762066 sshd[4897]: Accepted publickey for core from 139.178.68.195 port 38724 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:34.764697 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:34.775597 systemd-logind[1931]: New session 16 of user core. Feb 13 15:10:34.783347 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:10:35.046140 sshd[4899]: Connection closed by 139.178.68.195 port 38724 Feb 13 15:10:35.047160 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:35.054880 systemd[1]: sshd@15-172.31.30.142:22-139.178.68.195:38724.service: Deactivated successfully. Feb 13 15:10:35.060538 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:10:35.063098 systemd-logind[1931]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:10:35.064653 systemd-logind[1931]: Removed session 16. Feb 13 15:10:40.092558 systemd[1]: Started sshd@16-172.31.30.142:22-139.178.68.195:35786.service - OpenSSH per-connection server daemon (139.178.68.195:35786). Feb 13 15:10:40.291767 sshd[4914]: Accepted publickey for core from 139.178.68.195 port 35786 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:40.294728 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:40.305395 systemd-logind[1931]: New session 17 of user core. Feb 13 15:10:40.312280 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:10:40.591033 sshd[4916]: Connection closed by 139.178.68.195 port 35786 Feb 13 15:10:40.593521 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:40.603911 systemd[1]: sshd@16-172.31.30.142:22-139.178.68.195:35786.service: Deactivated successfully. Feb 13 15:10:40.608283 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:10:40.610529 systemd-logind[1931]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:10:40.637543 systemd[1]: Started sshd@17-172.31.30.142:22-139.178.68.195:35796.service - OpenSSH per-connection server daemon (139.178.68.195:35796). Feb 13 15:10:40.639975 systemd-logind[1931]: Removed session 17. Feb 13 15:10:40.840131 sshd[4927]: Accepted publickey for core from 139.178.68.195 port 35796 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:40.843108 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:40.855110 systemd-logind[1931]: New session 18 of user core. Feb 13 15:10:40.863340 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:10:41.189417 sshd[4930]: Connection closed by 139.178.68.195 port 35796 Feb 13 15:10:41.190462 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:41.198210 systemd[1]: sshd@17-172.31.30.142:22-139.178.68.195:35796.service: Deactivated successfully. Feb 13 15:10:41.204499 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:10:41.206547 systemd-logind[1931]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:10:41.209670 systemd-logind[1931]: Removed session 18. Feb 13 15:10:41.231580 systemd[1]: Started sshd@18-172.31.30.142:22-139.178.68.195:35808.service - OpenSSH per-connection server daemon (139.178.68.195:35808). 
Feb 13 15:10:41.428005 sshd[4940]: Accepted publickey for core from 139.178.68.195 port 35808 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:41.434326 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:41.444111 systemd-logind[1931]: New session 19 of user core. Feb 13 15:10:41.449417 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:10:44.286217 sshd[4942]: Connection closed by 139.178.68.195 port 35808 Feb 13 15:10:44.288164 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:44.298480 systemd[1]: sshd@18-172.31.30.142:22-139.178.68.195:35808.service: Deactivated successfully. Feb 13 15:10:44.308582 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:10:44.312818 systemd-logind[1931]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:10:44.343162 systemd[1]: Started sshd@19-172.31.30.142:22-139.178.68.195:35822.service - OpenSSH per-connection server daemon (139.178.68.195:35822). Feb 13 15:10:44.347376 systemd-logind[1931]: Removed session 19. Feb 13 15:10:44.535143 sshd[4958]: Accepted publickey for core from 139.178.68.195 port 35822 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:44.537663 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:44.549049 systemd-logind[1931]: New session 20 of user core. Feb 13 15:10:44.552479 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:10:45.066263 sshd[4961]: Connection closed by 139.178.68.195 port 35822 Feb 13 15:10:45.067996 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:45.074454 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:10:45.077032 systemd[1]: sshd@19-172.31.30.142:22-139.178.68.195:35822.service: Deactivated successfully. Feb 13 15:10:45.085564 systemd-logind[1931]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:10:45.087880 systemd-logind[1931]: Removed session 20. Feb 13 15:10:45.112526 systemd[1]: Started sshd@20-172.31.30.142:22-139.178.68.195:35830.service - OpenSSH per-connection server daemon (139.178.68.195:35830). Feb 13 15:10:45.312024 sshd[4971]: Accepted publickey for core from 139.178.68.195 port 35830 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:45.314802 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:45.326570 systemd-logind[1931]: New session 21 of user core. Feb 13 15:10:45.336593 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:10:45.614915 sshd[4973]: Connection closed by 139.178.68.195 port 35830 Feb 13 15:10:45.614039 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:45.625566 systemd-logind[1931]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:10:45.627842 systemd[1]: sshd@20-172.31.30.142:22-139.178.68.195:35830.service: Deactivated successfully. Feb 13 15:10:45.634363 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:10:45.638803 systemd-logind[1931]: Removed session 21. Feb 13 15:10:50.669495 systemd[1]: Started sshd@21-172.31.30.142:22-139.178.68.195:57020.service - OpenSSH per-connection server daemon (139.178.68.195:57020). 
Feb 13 15:10:50.864830 sshd[4987]: Accepted publickey for core from 139.178.68.195 port 57020 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:50.867792 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:50.879380 systemd-logind[1931]: New session 22 of user core. Feb 13 15:10:50.885334 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:10:51.154659 sshd[4989]: Connection closed by 139.178.68.195 port 57020 Feb 13 15:10:51.155309 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:51.163908 systemd[1]: sshd@21-172.31.30.142:22-139.178.68.195:57020.service: Deactivated successfully. Feb 13 15:10:51.169377 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:10:51.171771 systemd-logind[1931]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:10:51.174526 systemd-logind[1931]: Removed session 22. Feb 13 15:10:56.202510 systemd[1]: Started sshd@22-172.31.30.142:22-139.178.68.195:57032.service - OpenSSH per-connection server daemon (139.178.68.195:57032). Feb 13 15:10:56.400108 sshd[5005]: Accepted publickey for core from 139.178.68.195 port 57032 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:56.403277 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:56.417303 systemd-logind[1931]: New session 23 of user core. Feb 13 15:10:56.421153 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:10:56.686233 sshd[5007]: Connection closed by 139.178.68.195 port 57032 Feb 13 15:10:56.687301 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:56.694027 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:10:56.697971 systemd[1]: sshd@22-172.31.30.142:22-139.178.68.195:57032.service: Deactivated successfully. Feb 13 15:10:56.703874 systemd-logind[1931]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:10:56.706100 systemd-logind[1931]: Removed session 23. Feb 13 15:11:01.732460 systemd[1]: Started sshd@23-172.31.30.142:22-139.178.68.195:42532.service - OpenSSH per-connection server daemon (139.178.68.195:42532). Feb 13 15:11:01.928716 sshd[5019]: Accepted publickey for core from 139.178.68.195 port 42532 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:01.931375 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:01.945344 systemd-logind[1931]: New session 24 of user core. Feb 13 15:11:01.958244 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:11:02.199821 sshd[5021]: Connection closed by 139.178.68.195 port 42532 Feb 13 15:11:02.200726 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:02.207153 systemd[1]: sshd@23-172.31.30.142:22-139.178.68.195:42532.service: Deactivated successfully. Feb 13 15:11:02.211501 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:11:02.213105 systemd-logind[1931]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:11:02.215324 systemd-logind[1931]: Removed session 24. Feb 13 15:11:07.251808 systemd[1]: Started sshd@24-172.31.30.142:22-139.178.68.195:59024.service - OpenSSH per-connection server daemon (139.178.68.195:59024). 
Feb 13 15:11:07.448559 sshd[5035]: Accepted publickey for core from 139.178.68.195 port 59024 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:07.451686 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:07.463023 systemd-logind[1931]: New session 25 of user core. Feb 13 15:11:07.471365 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:11:07.741328 sshd[5037]: Connection closed by 139.178.68.195 port 59024 Feb 13 15:11:07.743242 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:07.752205 systemd[1]: sshd@24-172.31.30.142:22-139.178.68.195:59024.service: Deactivated successfully. Feb 13 15:11:07.756080 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:11:07.758551 systemd-logind[1931]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:11:07.761658 systemd-logind[1931]: Removed session 25. Feb 13 15:11:07.787722 systemd[1]: Started sshd@25-172.31.30.142:22-139.178.68.195:59030.service - OpenSSH per-connection server daemon (139.178.68.195:59030). Feb 13 15:11:07.986402 sshd[5049]: Accepted publickey for core from 139.178.68.195 port 59030 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:07.989670 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:07.998628 systemd-logind[1931]: New session 26 of user core. Feb 13 15:11:08.008248 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:11:11.842817 kubelet[3210]: I0213 15:11:11.840811 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-n7q5n" podStartSLOduration=113.840764956 podStartE2EDuration="1m53.840764956s" podCreationTimestamp="2025-02-13 15:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:56.823572135 +0000 UTC m=+43.668688130" watchObservedRunningTime="2025-02-13 15:11:11.840764956 +0000 UTC m=+118.685880915" Feb 13 15:11:11.890924 containerd[1955]: time="2025-02-13T15:11:11.888737632Z" level=info msg="StopContainer for \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\" with timeout 30 (s)" Feb 13 15:11:11.897348 containerd[1955]: time="2025-02-13T15:11:11.896350816Z" level=info msg="Stop container \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\" with signal terminated" Feb 13 15:11:11.925804 systemd[1]: cri-containerd-f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd.scope: Deactivated successfully. 
Feb 13 15:11:11.957372 containerd[1955]: time="2025-02-13T15:11:11.957044908Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:11:11.982029 containerd[1955]: time="2025-02-13T15:11:11.981183028Z" level=info msg="StopContainer for \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\" with timeout 2 (s)" Feb 13 15:11:11.983928 containerd[1955]: time="2025-02-13T15:11:11.983520556Z" level=info msg="Stop container \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\" with signal terminated" Feb 13 15:11:12.012519 systemd-networkd[1872]: lxc_health: Link DOWN Feb 13 15:11:12.012532 systemd-networkd[1872]: lxc_health: Lost carrier Feb 13 15:11:12.020803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd-rootfs.mount: Deactivated successfully. Feb 13 15:11:12.040346 containerd[1955]: time="2025-02-13T15:11:12.038060401Z" level=info msg="shim disconnected" id=f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd namespace=k8s.io Feb 13 15:11:12.040346 containerd[1955]: time="2025-02-13T15:11:12.038147245Z" level=warning msg="cleaning up after shim disconnected" id=f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd namespace=k8s.io Feb 13 15:11:12.040346 containerd[1955]: time="2025-02-13T15:11:12.038169025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:12.044316 systemd[1]: cri-containerd-0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c.scope: Deactivated successfully. Feb 13 15:11:12.045423 systemd[1]: cri-containerd-0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c.scope: Consumed 14.808s CPU time, 124.9M memory peak, 144K read from disk, 12.9M written to disk. Feb 13 15:11:12.090460 containerd[1955]: time="2025-02-13T15:11:12.088824157Z" level=info msg="StopContainer for \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\" returns successfully" Feb 13 15:11:12.093598 containerd[1955]: time="2025-02-13T15:11:12.093070381Z" level=info msg="StopPodSandbox for \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\"" Feb 13 15:11:12.093598 containerd[1955]: time="2025-02-13T15:11:12.093150193Z" level=info msg="Container to stop \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:11:12.104531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c-rootfs.mount: Deactivated successfully. Feb 13 15:11:12.105200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7-shm.mount: Deactivated successfully. 
Feb 13 15:11:12.120867 containerd[1955]: time="2025-02-13T15:11:12.120675781Z" level=info msg="shim disconnected" id=0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c namespace=k8s.io Feb 13 15:11:12.121445 containerd[1955]: time="2025-02-13T15:11:12.121234225Z" level=warning msg="cleaning up after shim disconnected" id=0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c namespace=k8s.io Feb 13 15:11:12.121809 containerd[1955]: time="2025-02-13T15:11:12.121739893Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:12.125415 systemd[1]: cri-containerd-1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7.scope: Deactivated successfully. Feb 13 15:11:12.172567 containerd[1955]: time="2025-02-13T15:11:12.172321561Z" level=info msg="StopContainer for \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\" returns successfully" Feb 13 15:11:12.173675 containerd[1955]: time="2025-02-13T15:11:12.173300329Z" level=info msg="StopPodSandbox for \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\"" Feb 13 15:11:12.173675 containerd[1955]: time="2025-02-13T15:11:12.173377405Z" level=info msg="Container to stop \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:11:12.173675 containerd[1955]: time="2025-02-13T15:11:12.173403121Z" level=info msg="Container to stop \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:11:12.173675 containerd[1955]: time="2025-02-13T15:11:12.173426617Z" level=info msg="Container to stop \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:11:12.173675 containerd[1955]: time="2025-02-13T15:11:12.173449513Z" level=info msg="Container to stop \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:11:12.173675 containerd[1955]: time="2025-02-13T15:11:12.173470573Z" level=info msg="Container to stop \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:11:12.196612 systemd[1]: cri-containerd-f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa.scope: Deactivated successfully. 
Feb 13 15:11:12.253783 containerd[1955]: time="2025-02-13T15:11:12.253627322Z" level=info msg="shim disconnected" id=f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa namespace=k8s.io Feb 13 15:11:12.253783 containerd[1955]: time="2025-02-13T15:11:12.253728746Z" level=warning msg="cleaning up after shim disconnected" id=f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa namespace=k8s.io Feb 13 15:11:12.253783 containerd[1955]: time="2025-02-13T15:11:12.253751366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:12.255057 containerd[1955]: time="2025-02-13T15:11:12.254840270Z" level=info msg="shim disconnected" id=1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7 namespace=k8s.io Feb 13 15:11:12.255057 containerd[1955]: time="2025-02-13T15:11:12.254939198Z" level=warning msg="cleaning up after shim disconnected" id=1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7 namespace=k8s.io Feb 13 15:11:12.255057 containerd[1955]: time="2025-02-13T15:11:12.254982410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:12.288327 containerd[1955]: time="2025-02-13T15:11:12.288172454Z" level=info msg="TearDown network for sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" successfully" Feb 13 15:11:12.288590 containerd[1955]: time="2025-02-13T15:11:12.288299702Z" level=info msg="StopPodSandbox for \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" returns successfully" Feb 13 15:11:12.288865 containerd[1955]: time="2025-02-13T15:11:12.288700886Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:11:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:11:12.291801 containerd[1955]: time="2025-02-13T15:11:12.291703586Z" level=info msg="TearDown network for sandbox \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" successfully" Feb 13 15:11:12.291801 containerd[1955]: time="2025-02-13T15:11:12.291758474Z" level=info msg="StopPodSandbox for \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" returns successfully" Feb 13 15:11:12.359804 kubelet[3210]: I0213 15:11:12.358710 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58f22\" (UniqueName: \"kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-kube-api-access-58f22\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.359804 kubelet[3210]: I0213 15:11:12.358774 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-bpf-maps\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.359804 kubelet[3210]: I0213 15:11:12.358814 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-xtables-lock\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.359804 kubelet[3210]: I0213 15:11:12.358855 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-config-path\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.359804 kubelet[3210]: I0213 15:11:12.358891 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-net\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.359804 kubelet[3210]: I0213 15:11:12.359622 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptxp6\" (UniqueName: \"kubernetes.io/projected/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-kube-api-access-ptxp6\") pod \"b40c1a18-6a44-4b15-8ecc-b6cba91f498e\" (UID: \"b40c1a18-6a44-4b15-8ecc-b6cba91f498e\") " Feb 13 15:11:12.360303 kubelet[3210]: I0213 15:11:12.359860 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.362316 kubelet[3210]: I0213 15:11:12.361572 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-lib-modules\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.362316 kubelet[3210]: I0213 15:11:12.361725 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-hubble-tls\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.362316 kubelet[3210]: I0213 15:11:12.361769 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-kernel\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.362316 kubelet[3210]: I0213 15:11:12.361838 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-cgroup\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.362316 kubelet[3210]: I0213 15:11:12.362128 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cni-path\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.362316 kubelet[3210]: I0213 15:11:12.362239 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-cilium-config-path\") pod \"b40c1a18-6a44-4b15-8ecc-b6cba91f498e\" (UID: \"b40c1a18-6a44-4b15-8ecc-b6cba91f498e\") " Feb 13 15:11:12.363167 kubelet[3210]: I0213 15:11:12.362796 3210 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-hostproc\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.363167 kubelet[3210]: I0213 15:11:12.362931 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/368bec3b-1909-4277-a6d5-89daa02ed593-clustermesh-secrets\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.363167 kubelet[3210]: I0213 15:11:12.363033 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-etc-cni-netd\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.363763 kubelet[3210]: I0213 15:11:12.363102 3210 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-run\") pod \"368bec3b-1909-4277-a6d5-89daa02ed593\" (UID: \"368bec3b-1909-4277-a6d5-89daa02ed593\") " Feb 13 15:11:12.364081 kubelet[3210]: I0213 15:11:12.363725 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.365110 kubelet[3210]: I0213 15:11:12.364366 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.365110 kubelet[3210]: I0213 15:11:12.364437 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.367539 kubelet[3210]: I0213 15:11:12.366063 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.367539 kubelet[3210]: I0213 15:11:12.366179 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.370652 kubelet[3210]: I0213 15:11:12.370461 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cni-path" (OuterVolumeSpecName: "cni-path") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.376667 kubelet[3210]: I0213 15:11:12.376518 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.378062 kubelet[3210]: I0213 15:11:12.377226 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-kube-api-access-58f22" (OuterVolumeSpecName: "kube-api-access-58f22") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "kube-api-access-58f22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:11:12.378062 kubelet[3210]: I0213 15:11:12.377339 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-hostproc" (OuterVolumeSpecName: "hostproc") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.382629 kubelet[3210]: I0213 15:11:12.382578 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:11:12.383381 kubelet[3210]: I0213 15:11:12.383263 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:11:12.384238 kubelet[3210]: I0213 15:11:12.384192 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368bec3b-1909-4277-a6d5-89daa02ed593-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:11:12.388248 kubelet[3210]: I0213 15:11:12.388168 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b40c1a18-6a44-4b15-8ecc-b6cba91f498e" (UID: "b40c1a18-6a44-4b15-8ecc-b6cba91f498e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:11:12.388423 kubelet[3210]: I0213 15:11:12.388316 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-kube-api-access-ptxp6" (OuterVolumeSpecName: "kube-api-access-ptxp6") pod "b40c1a18-6a44-4b15-8ecc-b6cba91f498e" (UID: "b40c1a18-6a44-4b15-8ecc-b6cba91f498e"). InnerVolumeSpecName "kube-api-access-ptxp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:11:12.390939 kubelet[3210]: I0213 15:11:12.390869 3210 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "368bec3b-1909-4277-a6d5-89daa02ed593" (UID: "368bec3b-1909-4277-a6d5-89daa02ed593"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.464743 3210 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/368bec3b-1909-4277-a6d5-89daa02ed593-clustermesh-secrets\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.464827 3210 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-etc-cni-netd\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.464853 3210 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-run\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.464875 3210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-58f22\" (UniqueName: \"kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-kube-api-access-58f22\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.464901 3210 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-bpf-maps\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.464924 3210 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-xtables-lock\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.464993 3210 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-config-path\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465073 kubelet[3210]: I0213 15:11:12.465025 3210 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-net\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465063 3210 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ptxp6\" (UniqueName: \"kubernetes.io/projected/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-kube-api-access-ptxp6\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465088 3210 
reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-lib-modules\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465110 3210 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/368bec3b-1909-4277-a6d5-89daa02ed593-hubble-tls\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465131 3210 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-host-proc-sys-kernel\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465151 3210 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cilium-cgroup\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465171 3210 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-cni-path\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465191 3210 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b40c1a18-6a44-4b15-8ecc-b6cba91f498e-cilium-config-path\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.465729 kubelet[3210]: I0213 15:11:12.465210 3210 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/368bec3b-1909-4277-a6d5-89daa02ed593-hostproc\") on node \"ip-172-31-30-142\" DevicePath \"\"" Feb 13 15:11:12.885294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7-rootfs.mount: Deactivated successfully. Feb 13 15:11:12.885480 systemd[1]: var-lib-kubelet-pods-b40c1a18\x2d6a44\x2d4b15\x2d8ecc\x2db6cba91f498e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dptxp6.mount: Deactivated successfully. Feb 13 15:11:12.885629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa-rootfs.mount: Deactivated successfully. Feb 13 15:11:12.885763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa-shm.mount: Deactivated successfully. Feb 13 15:11:12.885898 systemd[1]: var-lib-kubelet-pods-368bec3b\x2d1909\x2d4277\x2da6d5\x2d89daa02ed593-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58f22.mount: Deactivated successfully. Feb 13 15:11:12.886077 systemd[1]: var-lib-kubelet-pods-368bec3b\x2d1909\x2d4277\x2da6d5\x2d89daa02ed593-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:11:12.886222 systemd[1]: var-lib-kubelet-pods-368bec3b\x2d1909\x2d4277\x2da6d5\x2d89daa02ed593-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 15:11:12.962575 kubelet[3210]: I0213 15:11:12.962514 3210 scope.go:117] "RemoveContainer" containerID="f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd" Feb 13 15:11:12.976855 containerd[1955]: time="2025-02-13T15:11:12.974884097Z" level=info msg="RemoveContainer for \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\"" Feb 13 15:11:12.983510 systemd[1]: Removed slice kubepods-besteffort-podb40c1a18_6a44_4b15_8ecc_b6cba91f498e.slice - libcontainer container kubepods-besteffort-podb40c1a18_6a44_4b15_8ecc_b6cba91f498e.slice. Feb 13 15:11:12.999720 containerd[1955]: time="2025-02-13T15:11:12.999396785Z" level=info msg="RemoveContainer for \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\" returns successfully" Feb 13 15:11:13.000840 kubelet[3210]: I0213 15:11:13.000630 3210 scope.go:117] "RemoveContainer" containerID="f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd" Feb 13 15:11:13.003433 containerd[1955]: time="2025-02-13T15:11:13.003135709Z" level=error msg="ContainerStatus for \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\": not found" Feb 13 15:11:13.004211 kubelet[3210]: E0213 15:11:13.003674 3210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\": not found" containerID="f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd" Feb 13 15:11:13.004933 kubelet[3210]: I0213 15:11:13.004321 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd"} err="failed to get container status \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\": rpc error: code = NotFound desc = an error occurred when try to find container \"f82df1e2649a73e36a9cd8627f5c3565c998a0343aa865df37511af430064bfd\": not found" Feb 13 15:11:13.004933 kubelet[3210]: I0213 15:11:13.004880 3210 scope.go:117] "RemoveContainer" containerID="0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c" Feb 13 15:11:13.011572 containerd[1955]: time="2025-02-13T15:11:13.010141094Z" level=info msg="RemoveContainer for \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\"" Feb 13 15:11:13.025621 systemd[1]: Removed slice kubepods-burstable-pod368bec3b_1909_4277_a6d5_89daa02ed593.slice - libcontainer container kubepods-burstable-pod368bec3b_1909_4277_a6d5_89daa02ed593.slice. Feb 13 15:11:13.026243 systemd[1]: kubepods-burstable-pod368bec3b_1909_4277_a6d5_89daa02ed593.slice: Consumed 14.974s CPU time, 125.3M memory peak, 144K read from disk, 12.9M written to disk. 
Feb 13 15:11:13.036554 containerd[1955]: time="2025-02-13T15:11:13.036474170Z" level=info msg="RemoveContainer for \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\" returns successfully" Feb 13 15:11:13.037488 kubelet[3210]: I0213 15:11:13.037402 3210 scope.go:117] "RemoveContainer" containerID="f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453" Feb 13 15:11:13.040232 containerd[1955]: time="2025-02-13T15:11:13.040177718Z" level=info msg="RemoveContainer for \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\"" Feb 13 15:11:13.047200 containerd[1955]: time="2025-02-13T15:11:13.047076782Z" level=info msg="RemoveContainer for \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\" returns successfully" Feb 13 15:11:13.048141 kubelet[3210]: I0213 15:11:13.048055 3210 scope.go:117] "RemoveContainer" containerID="9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2" Feb 13 15:11:13.054412 containerd[1955]: time="2025-02-13T15:11:13.053352878Z" level=info msg="RemoveContainer for \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\"" Feb 13 15:11:13.061727 containerd[1955]: time="2025-02-13T15:11:13.061610342Z" level=info msg="RemoveContainer for \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\" returns successfully" Feb 13 15:11:13.064617 kubelet[3210]: I0213 15:11:13.064533 3210 scope.go:117] "RemoveContainer" containerID="8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04" Feb 13 15:11:13.071993 containerd[1955]: time="2025-02-13T15:11:13.071213030Z" level=info msg="RemoveContainer for \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\"" Feb 13 15:11:13.081671 containerd[1955]: time="2025-02-13T15:11:13.081466322Z" level=info msg="RemoveContainer for \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\" returns successfully" Feb 13 15:11:13.082096 kubelet[3210]: I0213 15:11:13.082060 3210 scope.go:117] "RemoveContainer" containerID="87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0" Feb 13 15:11:13.084400 containerd[1955]: time="2025-02-13T15:11:13.084018686Z" level=info msg="RemoveContainer for \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\"" Feb 13 15:11:13.090209 containerd[1955]: time="2025-02-13T15:11:13.090131654Z" level=info msg="RemoveContainer for \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\" returns successfully" Feb 13 15:11:13.090868 kubelet[3210]: I0213 15:11:13.090832 3210 scope.go:117] "RemoveContainer" containerID="0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c" Feb 13 15:11:13.091815 containerd[1955]: time="2025-02-13T15:11:13.091587662Z" level=error msg="ContainerStatus for \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\": not found" Feb 13 15:11:13.092186 kubelet[3210]: E0213 15:11:13.092102 3210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\": not found" containerID="0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c" Feb 13 15:11:13.092357 kubelet[3210]: I0213 15:11:13.092232 3210 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c"} err="failed to get container status \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0602f8410308ec3977867a6d033bcb0a9cce0382a7708d66ab387267f4784b2c\": not found" Feb 13 15:11:13.092357 kubelet[3210]: I0213 15:11:13.092293 3210 scope.go:117] "RemoveContainer" containerID="f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453" Feb 13 15:11:13.092862 containerd[1955]: time="2025-02-13T15:11:13.092796482Z" level=error msg="ContainerStatus for \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\": not found" Feb 13 15:11:13.093102 kubelet[3210]: E0213 15:11:13.093059 3210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\": not found" containerID="f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453" Feb 13 15:11:13.093173 kubelet[3210]: I0213 15:11:13.093110 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453"} err="failed to get container status \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6754322fdaeb890cca75dbd664bd50118a310f1dac3a28f1f8482b22197c453\": not found" Feb 13 15:11:13.093173 kubelet[3210]: I0213 15:11:13.093147 3210 scope.go:117] "RemoveContainer" containerID="9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2" Feb 13 15:11:13.093745 containerd[1955]: time="2025-02-13T15:11:13.093484994Z" level=error msg="ContainerStatus for \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\": not found" Feb 13 15:11:13.094161 kubelet[3210]: E0213 15:11:13.094107 3210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\": not found" containerID="9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2" Feb 13 15:11:13.094387 kubelet[3210]: I0213 15:11:13.094185 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2"} err="failed to get container status \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9569b2b27324856a00954ef1e26f56d485c9f9ad40898aba8193294513f1eaa2\": not found" Feb 13 15:11:13.094387 kubelet[3210]: I0213 15:11:13.094245 3210 scope.go:117] "RemoveContainer" containerID="8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04" Feb 13 15:11:13.094748 containerd[1955]: time="2025-02-13T15:11:13.094697546Z" level=error msg="ContainerStatus for \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\": not found" Feb 13 15:11:13.095008 kubelet[3210]: E0213 15:11:13.094926 3210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\": not found" containerID="8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04" Feb 13 15:11:13.095113 kubelet[3210]: I0213 15:11:13.094985 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04"} err="failed to get container status \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\": rpc error: code = NotFound desc = an error occurred when try to find container \"8155bca9f6aa3eacd1a6fad8a5e04862f1413c811a93cd831addc5a928d1ab04\": not found" Feb 13 15:11:13.095113 kubelet[3210]: I0213 15:11:13.095043 3210 scope.go:117] "RemoveContainer" containerID="87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0" Feb 13 15:11:13.095717 containerd[1955]: time="2025-02-13T15:11:13.095448842Z" level=error msg="ContainerStatus for \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\": not found" Feb 13 15:11:13.095986 kubelet[3210]: E0213 15:11:13.095752 3210 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\": not found" containerID="87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0" Feb 13 15:11:13.095986 kubelet[3210]: I0213 15:11:13.095854 3210 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0"} err="failed to get container status \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"87a7ae4d9505a5a8b9ab84fb97815f8f1ae2ca859227d5ae375d7aaffeb9e0c0\": not found" Feb 13 15:11:13.439983 kubelet[3210]: I0213 15:11:13.438970 3210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368bec3b-1909-4277-a6d5-89daa02ed593" path="/var/lib/kubelet/pods/368bec3b-1909-4277-a6d5-89daa02ed593/volumes" Feb 13 15:11:13.440571 kubelet[3210]: I0213 15:11:13.440511 3210 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40c1a18-6a44-4b15-8ecc-b6cba91f498e" path="/var/lib/kubelet/pods/b40c1a18-6a44-4b15-8ecc-b6cba91f498e/volumes" Feb 13 15:11:13.463203 containerd[1955]: time="2025-02-13T15:11:13.463157176Z" level=info msg="StopPodSandbox for \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\"" Feb 13 15:11:13.463614 containerd[1955]: time="2025-02-13T15:11:13.463582780Z" level=info msg="TearDown network for sandbox \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" successfully" Feb 13 15:11:13.463768 containerd[1955]: time="2025-02-13T15:11:13.463742332Z" level=info msg="StopPodSandbox for \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" returns successfully" Feb 13 15:11:13.464760 containerd[1955]: 
time="2025-02-13T15:11:13.464660920Z" level=info msg="RemovePodSandbox for \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\"" Feb 13 15:11:13.464760 containerd[1955]: time="2025-02-13T15:11:13.464759212Z" level=info msg="Forcibly stopping sandbox \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\"" Feb 13 15:11:13.464937 containerd[1955]: time="2025-02-13T15:11:13.464889592Z" level=info msg="TearDown network for sandbox \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" successfully" Feb 13 15:11:13.470205 containerd[1955]: time="2025-02-13T15:11:13.470115232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:13.470369 containerd[1955]: time="2025-02-13T15:11:13.470221420Z" level=info msg="RemovePodSandbox \"1b66c51ce28b4b25f01d3c6a6916daa128893904596aa004d013a88182c2ddd7\" returns successfully" Feb 13 15:11:13.471314 containerd[1955]: time="2025-02-13T15:11:13.471085372Z" level=info msg="StopPodSandbox for \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\"" Feb 13 15:11:13.471314 containerd[1955]: time="2025-02-13T15:11:13.471219832Z" level=info msg="TearDown network for sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" successfully" Feb 13 15:11:13.471314 containerd[1955]: time="2025-02-13T15:11:13.471245068Z" level=info msg="StopPodSandbox for \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" returns successfully" Feb 13 15:11:13.473017 containerd[1955]: time="2025-02-13T15:11:13.472025320Z" level=info msg="RemovePodSandbox for \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\"" Feb 13 15:11:13.473017 containerd[1955]: time="2025-02-13T15:11:13.472081504Z" level=info msg="Forcibly stopping sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\"" Feb 13 15:11:13.473017 containerd[1955]: time="2025-02-13T15:11:13.472182112Z" level=info msg="TearDown network for sandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" successfully" Feb 13 15:11:13.477488 containerd[1955]: time="2025-02-13T15:11:13.477403732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:11:13.477607 containerd[1955]: time="2025-02-13T15:11:13.477493756Z" level=info msg="RemovePodSandbox \"f5f3768e98daa21d932bcc6543cac32e8390dcb0d7f54a95209973c49834c4aa\" returns successfully" Feb 13 15:11:13.705131 kubelet[3210]: E0213 15:11:13.704772 3210 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:11:13.798275 sshd[5051]: Connection closed by 139.178.68.195 port 59030 Feb 13 15:11:13.799634 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:13.807382 systemd[1]: sshd@25-172.31.30.142:22-139.178.68.195:59030.service: Deactivated successfully. Feb 13 15:11:13.812530 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:11:13.813178 systemd[1]: session-26.scope: Consumed 3.107s CPU time, 23.7M memory peak. 
Feb 13 15:11:13.814458 systemd-logind[1931]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:11:13.816686 systemd-logind[1931]: Removed session 26. Feb 13 15:11:13.844023 systemd[1]: Started sshd@26-172.31.30.142:22-139.178.68.195:59032.service - OpenSSH per-connection server daemon (139.178.68.195:59032). Feb 13 15:11:14.022783 sshd[5215]: Accepted publickey for core from 139.178.68.195 port 59032 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:14.026786 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:14.036241 systemd-logind[1931]: New session 27 of user core. Feb 13 15:11:14.045321 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:11:14.181439 ntpd[1926]: Deleting interface #12 lxc_health, fe80::c043:adff:fec9:900b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs Feb 13 15:11:14.181910 ntpd[1926]: 13 Feb 15:11:14 ntpd[1926]: Deleting interface #12 lxc_health, fe80::c043:adff:fec9:900b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs Feb 13 15:11:15.322167 sshd[5217]: Connection closed by 139.178.68.195 port 59032 Feb 13 15:11:15.325274 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:15.334316 kubelet[3210]: E0213 15:11:15.334250 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="368bec3b-1909-4277-a6d5-89daa02ed593" containerName="mount-cgroup" Feb 13 15:11:15.339118 kubelet[3210]: E0213 15:11:15.335060 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="368bec3b-1909-4277-a6d5-89daa02ed593" containerName="apply-sysctl-overwrites" Feb 13 15:11:15.339118 kubelet[3210]: E0213 15:11:15.335130 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="368bec3b-1909-4277-a6d5-89daa02ed593" containerName="mount-bpf-fs" Feb 13 15:11:15.339118 kubelet[3210]: E0213 15:11:15.335150 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="368bec3b-1909-4277-a6d5-89daa02ed593" containerName="clean-cilium-state" Feb 13 15:11:15.339118 kubelet[3210]: E0213 15:11:15.335167 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b40c1a18-6a44-4b15-8ecc-b6cba91f498e" containerName="cilium-operator" Feb 13 15:11:15.339118 kubelet[3210]: E0213 15:11:15.335217 3210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="368bec3b-1909-4277-a6d5-89daa02ed593" containerName="cilium-agent" Feb 13 15:11:15.339118 kubelet[3210]: I0213 15:11:15.335333 3210 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40c1a18-6a44-4b15-8ecc-b6cba91f498e" containerName="cilium-operator" Feb 13 15:11:15.339118 kubelet[3210]: I0213 15:11:15.335356 3210 memory_manager.go:354] "RemoveStaleState removing state" podUID="368bec3b-1909-4277-a6d5-89daa02ed593" containerName="cilium-agent" Feb 13 15:11:15.336359 systemd[1]: sshd@26-172.31.30.142:22-139.178.68.195:59032.service: Deactivated successfully. Feb 13 15:11:15.347735 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:11:15.348342 systemd[1]: session-27.scope: Consumed 1.068s CPU time, 23.7M memory peak. Feb 13 15:11:15.374515 systemd-logind[1931]: Session 27 logged out. Waiting for processes to exit. 
Feb 13 15:11:15.382090 kubelet[3210]: I0213 15:11:15.382017 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-xtables-lock\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382223 kubelet[3210]: I0213 15:11:15.382096 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ded374db-f900-4f04-adfb-ff082caec420-cilium-ipsec-secrets\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382223 kubelet[3210]: I0213 15:11:15.382141 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-host-proc-sys-net\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382223 kubelet[3210]: I0213 15:11:15.382186 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-hostproc\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382417 kubelet[3210]: I0213 15:11:15.382227 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ded374db-f900-4f04-adfb-ff082caec420-cilium-config-path\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382417 kubelet[3210]: I0213 15:11:15.382272 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-cilium-run\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382417 kubelet[3210]: I0213 15:11:15.382308 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-cilium-cgroup\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382417 kubelet[3210]: I0213 15:11:15.382346 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-lib-modules\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382417 kubelet[3210]: I0213 15:11:15.382385 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-bpf-maps\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382673 kubelet[3210]: I0213 15:11:15.382420 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-cni-path\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382673 kubelet[3210]: I0213 15:11:15.382460 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtcjq\" (UniqueName: \"kubernetes.io/projected/ded374db-f900-4f04-adfb-ff082caec420-kube-api-access-rtcjq\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382673 kubelet[3210]: I0213 15:11:15.382511 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-host-proc-sys-kernel\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382673 kubelet[3210]: I0213 15:11:15.382546 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ded374db-f900-4f04-adfb-ff082caec420-hubble-tls\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382673 kubelet[3210]: I0213 15:11:15.382584 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ded374db-f900-4f04-adfb-ff082caec420-clustermesh-secrets\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.382673 kubelet[3210]: I0213 15:11:15.382622 3210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ded374db-f900-4f04-adfb-ff082caec420-etc-cni-netd\") pod \"cilium-f94gq\" (UID: \"ded374db-f900-4f04-adfb-ff082caec420\") " pod="kube-system/cilium-f94gq" Feb 13 15:11:15.388608 systemd[1]: Started sshd@27-172.31.30.142:22-139.178.68.195:59040.service - OpenSSH per-connection server daemon (139.178.68.195:59040). Feb 13 15:11:15.393152 systemd-logind[1931]: Removed session 27. Feb 13 15:11:15.422681 systemd[1]: Created slice kubepods-burstable-podded374db_f900_4f04_adfb_ff082caec420.slice - libcontainer container kubepods-burstable-podded374db_f900_4f04_adfb_ff082caec420.slice. Feb 13 15:11:15.654210 sshd[5227]: Accepted publickey for core from 139.178.68.195 port 59040 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:15.660170 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:15.674362 systemd-logind[1931]: New session 28 of user core. Feb 13 15:11:15.688071 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:11:15.748191 containerd[1955]: time="2025-02-13T15:11:15.748116031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f94gq,Uid:ded374db-f900-4f04-adfb-ff082caec420,Namespace:kube-system,Attempt:0,}" Feb 13 15:11:15.790807 containerd[1955]: time="2025-02-13T15:11:15.790528027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:11:15.791154 containerd[1955]: time="2025-02-13T15:11:15.791076907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:11:15.792986 containerd[1955]: time="2025-02-13T15:11:15.791204935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:15.793283 containerd[1955]: time="2025-02-13T15:11:15.793120147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:11:15.819463 sshd[5234]: Connection closed by 139.178.68.195 port 59040 Feb 13 15:11:15.821744 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:15.824282 systemd[1]: Started cri-containerd-117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d.scope - libcontainer container 117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d. Feb 13 15:11:15.831844 systemd[1]: sshd@27-172.31.30.142:22-139.178.68.195:59040.service: Deactivated successfully. Feb 13 15:11:15.843071 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:11:15.847063 systemd-logind[1931]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:11:15.868495 systemd[1]: Started sshd@28-172.31.30.142:22-139.178.68.195:59056.service - OpenSSH per-connection server daemon (139.178.68.195:59056). Feb 13 15:11:15.872244 systemd-logind[1931]: Removed session 28. Feb 13 15:11:15.917712 containerd[1955]: time="2025-02-13T15:11:15.917491532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f94gq,Uid:ded374db-f900-4f04-adfb-ff082caec420,Namespace:kube-system,Attempt:0,} returns sandbox id \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\"" Feb 13 15:11:15.926339 containerd[1955]: time="2025-02-13T15:11:15.925657472Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:11:15.968980 containerd[1955]: time="2025-02-13T15:11:15.968894228Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a\"" Feb 13 15:11:15.969999 containerd[1955]: time="2025-02-13T15:11:15.969790100Z" level=info msg="StartContainer for \"a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a\"" Feb 13 15:11:16.025247 systemd[1]: Started cri-containerd-a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a.scope - libcontainer container a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a. Feb 13 15:11:16.080985 containerd[1955]: time="2025-02-13T15:11:16.080621165Z" level=info msg="StartContainer for \"a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a\" returns successfully" Feb 13 15:11:16.086452 sshd[5273]: Accepted publickey for core from 139.178.68.195 port 59056 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:11:16.091182 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:11:16.105908 systemd-logind[1931]: New session 29 of user core. Feb 13 15:11:16.113308 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 15:11:16.117182 systemd[1]: cri-containerd-a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a.scope: Deactivated successfully. 
Feb 13 15:11:16.176891 containerd[1955]: time="2025-02-13T15:11:16.176791913Z" level=info msg="shim disconnected" id=a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a namespace=k8s.io Feb 13 15:11:16.176891 containerd[1955]: time="2025-02-13T15:11:16.176873417Z" level=warning msg="cleaning up after shim disconnected" id=a7cf35d46ba6a346ae853915f3078e239f48b8a38619f952270c140f9583801a namespace=k8s.io Feb 13 15:11:16.176891 containerd[1955]: time="2025-02-13T15:11:16.176895737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:16.644424 kubelet[3210]: I0213 15:11:16.643565 3210 setters.go:600] "Node became not ready" node="ip-172-31-30-142" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:11:16Z","lastTransitionTime":"2025-02-13T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:11:17.029196 containerd[1955]: time="2025-02-13T15:11:17.029120417Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:11:17.058398 containerd[1955]: time="2025-02-13T15:11:17.058319238Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3\"" Feb 13 15:11:17.060667 containerd[1955]: time="2025-02-13T15:11:17.060493026Z" level=info msg="StartContainer for \"03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3\"" Feb 13 15:11:17.126255 systemd[1]: Started cri-containerd-03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3.scope - libcontainer container 03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3. Feb 13 15:11:17.171115 containerd[1955]: time="2025-02-13T15:11:17.171036954Z" level=info msg="StartContainer for \"03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3\" returns successfully" Feb 13 15:11:17.186010 systemd[1]: cri-containerd-03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3.scope: Deactivated successfully. Feb 13 15:11:17.245192 containerd[1955]: time="2025-02-13T15:11:17.245065747Z" level=info msg="shim disconnected" id=03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3 namespace=k8s.io Feb 13 15:11:17.245192 containerd[1955]: time="2025-02-13T15:11:17.245156143Z" level=warning msg="cleaning up after shim disconnected" id=03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3 namespace=k8s.io Feb 13 15:11:17.245192 containerd[1955]: time="2025-02-13T15:11:17.245178499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:17.500721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03b0daf5d4dac070ad002b92cc339cd9b6d51146c79c12bc10d4cc5103ac55b3-rootfs.mount: Deactivated successfully. 
Feb 13 15:11:18.038182 containerd[1955]: time="2025-02-13T15:11:18.037889718Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:11:18.074271 containerd[1955]: time="2025-02-13T15:11:18.074047783Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1\"" Feb 13 15:11:18.077175 containerd[1955]: time="2025-02-13T15:11:18.077091115Z" level=info msg="StartContainer for \"dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1\"" Feb 13 15:11:18.157286 systemd[1]: Started cri-containerd-dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1.scope - libcontainer container dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1. Feb 13 15:11:18.223910 containerd[1955]: time="2025-02-13T15:11:18.223825699Z" level=info msg="StartContainer for \"dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1\" returns successfully" Feb 13 15:11:18.228741 systemd[1]: cri-containerd-dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1.scope: Deactivated successfully. Feb 13 15:11:18.288127 containerd[1955]: time="2025-02-13T15:11:18.287936924Z" level=info msg="shim disconnected" id=dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1 namespace=k8s.io Feb 13 15:11:18.288127 containerd[1955]: time="2025-02-13T15:11:18.288101168Z" level=warning msg="cleaning up after shim disconnected" id=dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1 namespace=k8s.io Feb 13 15:11:18.290313 containerd[1955]: time="2025-02-13T15:11:18.288146432Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:18.500999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0499661e0f5243bd5d8c04e7ac31d3cc2ce504b56ca18e821f851373de2fa1-rootfs.mount: Deactivated successfully. Feb 13 15:11:18.706715 kubelet[3210]: E0213 15:11:18.706648 3210 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:11:19.045632 containerd[1955]: time="2025-02-13T15:11:19.045379735Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:11:19.075797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948807976.mount: Deactivated successfully. Feb 13 15:11:19.079179 containerd[1955]: time="2025-02-13T15:11:19.077425880Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24\"" Feb 13 15:11:19.082472 containerd[1955]: time="2025-02-13T15:11:19.080402024Z" level=info msg="StartContainer for \"43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24\"" Feb 13 15:11:19.145265 systemd[1]: Started cri-containerd-43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24.scope - libcontainer container 43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24. 
Feb 13 15:11:19.195306 systemd[1]: cri-containerd-43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24.scope: Deactivated successfully. Feb 13 15:11:19.200226 containerd[1955]: time="2025-02-13T15:11:19.199583384Z" level=info msg="StartContainer for \"43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24\" returns successfully" Feb 13 15:11:19.246350 containerd[1955]: time="2025-02-13T15:11:19.246250604Z" level=info msg="shim disconnected" id=43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24 namespace=k8s.io Feb 13 15:11:19.246350 containerd[1955]: time="2025-02-13T15:11:19.246326336Z" level=warning msg="cleaning up after shim disconnected" id=43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24 namespace=k8s.io Feb 13 15:11:19.246350 containerd[1955]: time="2025-02-13T15:11:19.246347408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:19.502633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43f50fc7a55ee7a2b5817e93eb24d5294c96cbd65bcdc63a7e58337d710e5a24-rootfs.mount: Deactivated successfully. Feb 13 15:11:20.052002 containerd[1955]: time="2025-02-13T15:11:20.051204548Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:11:20.083194 containerd[1955]: time="2025-02-13T15:11:20.082827633Z" level=info msg="CreateContainer within sandbox \"117fd26dfa8fbc8440282434674a5b7a93a6ed48485b7b5393276cce314bc22d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"97955505c00e6c708a9d083a0c1665addb228cc6f3e920b7868a617e0bf5c336\"" Feb 13 15:11:20.086392 containerd[1955]: time="2025-02-13T15:11:20.086328153Z" level=info msg="StartContainer for \"97955505c00e6c708a9d083a0c1665addb228cc6f3e920b7868a617e0bf5c336\"" Feb 13 15:11:20.156277 systemd[1]: Started cri-containerd-97955505c00e6c708a9d083a0c1665addb228cc6f3e920b7868a617e0bf5c336.scope - libcontainer container 97955505c00e6c708a9d083a0c1665addb228cc6f3e920b7868a617e0bf5c336. Feb 13 15:11:20.222365 containerd[1955]: time="2025-02-13T15:11:20.222264885Z" level=info msg="StartContainer for \"97955505c00e6c708a9d083a0c1665addb228cc6f3e920b7868a617e0bf5c336\" returns successfully" Feb 13 15:11:21.053015 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 15:11:25.558208 systemd-networkd[1872]: lxc_health: Link UP Feb 13 15:11:25.562624 (udev-worker)[6079]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:11:25.567681 systemd-networkd[1872]: lxc_health: Gained carrier Feb 13 15:11:25.799663 kubelet[3210]: I0213 15:11:25.799580 3210 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f94gq" podStartSLOduration=10.799556909 podStartE2EDuration="10.799556909s" podCreationTimestamp="2025-02-13 15:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:21.128501086 +0000 UTC m=+127.973617069" watchObservedRunningTime="2025-02-13 15:11:25.799556909 +0000 UTC m=+132.644672868" Feb 13 15:11:26.989694 systemd-networkd[1872]: lxc_health: Gained IPv6LL Feb 13 15:11:27.411541 systemd[1]: run-containerd-runc-k8s.io-97955505c00e6c708a9d083a0c1665addb228cc6f3e920b7868a617e0bf5c336-runc.1R8qdG.mount: Deactivated successfully. 
Feb 13 15:11:29.180867 ntpd[1926]: Listen normally on 15 lxc_health [fe80::10a0:fcff:fe79:ff35%14]:123 Feb 13 15:11:29.182119 ntpd[1926]: 13 Feb 15:11:29 ntpd[1926]: Listen normally on 15 lxc_health [fe80::10a0:fcff:fe79:ff35%14]:123 Feb 13 15:11:32.142760 sshd[5320]: Connection closed by 139.178.68.195 port 59056 Feb 13 15:11:32.144003 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Feb 13 15:11:32.152932 systemd-logind[1931]: Session 29 logged out. Waiting for processes to exit. Feb 13 15:11:32.153753 systemd[1]: sshd@28-172.31.30.142:22-139.178.68.195:59056.service: Deactivated successfully. Feb 13 15:11:32.162089 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:11:32.169938 systemd-logind[1931]: Removed session 29. Feb 13 15:11:32.973504 update_engine[1932]: I20250213 15:11:32.973363 1932 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 15:11:32.973504 update_engine[1932]: I20250213 15:11:32.973489 1932 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 15:11:32.974199 update_engine[1932]: I20250213 15:11:32.973824 1932 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 15:11:32.975113 update_engine[1932]: I20250213 15:11:32.974742 1932 omaha_request_params.cc:62] Current group set to alpha Feb 13 15:11:32.975297 update_engine[1932]: I20250213 15:11:32.975263 1932 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 15:11:32.975374 update_engine[1932]: I20250213 15:11:32.975297 1932 update_attempter.cc:643] Scheduling an action processor start. Feb 13 15:11:32.975374 update_engine[1932]: I20250213 15:11:32.975342 1932 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:11:32.975459 update_engine[1932]: I20250213 15:11:32.975430 1932 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 15:11:32.975589 update_engine[1932]: I20250213 15:11:32.975541 1932 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:11:32.975589 update_engine[1932]: I20250213 15:11:32.975571 1932 omaha_request_action.cc:272] Request: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.975589 update_engine[1932]: Feb 13 15:11:32.976195 update_engine[1932]: I20250213 15:11:32.975593 1932 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:11:32.980002 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 15:11:32.980478 update_engine[1932]: I20250213 15:11:32.979346 1932 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:11:32.980478 update_engine[1932]: I20250213 15:11:32.980144 1932 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:11:33.011171 update_engine[1932]: E20250213 15:11:33.011075 1932 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:11:33.011325 update_engine[1932]: I20250213 15:11:33.011227 1932 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 15:11:42.976370 update_engine[1932]: I20250213 15:11:42.976270 1932 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:11:42.976926 update_engine[1932]: I20250213 15:11:42.976640 1932 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:11:42.977071 update_engine[1932]: I20250213 15:11:42.977043 1932 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:11:42.977637 update_engine[1932]: E20250213 15:11:42.977577 1932 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:11:42.977711 update_engine[1932]: I20250213 15:11:42.977670 1932 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 15:11:46.350104 kubelet[3210]: E0213 15:11:46.349048 3210 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-142?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 15:11:47.552802 systemd[1]: cri-containerd-97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf.scope: Deactivated successfully. Feb 13 15:11:47.554225 systemd[1]: cri-containerd-97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf.scope: Consumed 4.364s CPU time, 55.4M memory peak. Feb 13 15:11:47.595435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf-rootfs.mount: Deactivated successfully. 
Feb 13 15:11:47.609018 containerd[1955]: time="2025-02-13T15:11:47.608909977Z" level=info msg="shim disconnected" id=97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf namespace=k8s.io Feb 13 15:11:47.609018 containerd[1955]: time="2025-02-13T15:11:47.609014845Z" level=warning msg="cleaning up after shim disconnected" id=97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf namespace=k8s.io Feb 13 15:11:47.609694 containerd[1955]: time="2025-02-13T15:11:47.609036277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:47.630386 containerd[1955]: time="2025-02-13T15:11:47.630306037Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:11:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:11:48.147810 kubelet[3210]: I0213 15:11:48.147623 3210 scope.go:117] "RemoveContainer" containerID="97fb93bd37e5488b24ee663e9a6cf330ee327b9e91bf87adf40c16c8583712bf" Feb 13 15:11:48.151363 containerd[1955]: time="2025-02-13T15:11:48.151258116Z" level=info msg="CreateContainer within sandbox \"bcf1359f7bc7ecd46f9d831f8205e296834edf094e604d41f4f8233564cc7e15\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 15:11:48.173295 containerd[1955]: time="2025-02-13T15:11:48.173158464Z" level=info msg="CreateContainer within sandbox \"bcf1359f7bc7ecd46f9d831f8205e296834edf094e604d41f4f8233564cc7e15\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"41a3cab83b698f32e530737b52c6c9b044c6af9190ff0e77ad8c7fc9ec84b57c\"" Feb 13 15:11:48.175033 containerd[1955]: time="2025-02-13T15:11:48.174649068Z" level=info msg="StartContainer for \"41a3cab83b698f32e530737b52c6c9b044c6af9190ff0e77ad8c7fc9ec84b57c\"" Feb 13 15:11:48.237280 systemd[1]: Started cri-containerd-41a3cab83b698f32e530737b52c6c9b044c6af9190ff0e77ad8c7fc9ec84b57c.scope - libcontainer container 41a3cab83b698f32e530737b52c6c9b044c6af9190ff0e77ad8c7fc9ec84b57c. Feb 13 15:11:48.308218 containerd[1955]: time="2025-02-13T15:11:48.308147461Z" level=info msg="StartContainer for \"41a3cab83b698f32e530737b52c6c9b044c6af9190ff0e77ad8c7fc9ec84b57c\" returns successfully" Feb 13 15:11:48.594492 systemd[1]: run-containerd-runc-k8s.io-41a3cab83b698f32e530737b52c6c9b044c6af9190ff0e77ad8c7fc9ec84b57c-runc.Ul9ZVd.mount: Deactivated successfully. Feb 13 15:11:51.234800 systemd[1]: cri-containerd-7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647.scope: Deactivated successfully. Feb 13 15:11:51.235389 systemd[1]: cri-containerd-7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647.scope: Consumed 2.689s CPU time, 20.7M memory peak. Feb 13 15:11:51.278703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647-rootfs.mount: Deactivated successfully. 
Feb 13 15:11:51.289455 containerd[1955]: time="2025-02-13T15:11:51.289280620Z" level=info msg="shim disconnected" id=7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647 namespace=k8s.io Feb 13 15:11:51.289455 containerd[1955]: time="2025-02-13T15:11:51.289430380Z" level=warning msg="cleaning up after shim disconnected" id=7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647 namespace=k8s.io Feb 13 15:11:51.289455 containerd[1955]: time="2025-02-13T15:11:51.289454212Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:52.165738 kubelet[3210]: I0213 15:11:52.165696 3210 scope.go:117] "RemoveContainer" containerID="7f2895107de77feaa0375b08b5eed40e9758c2ab144328c38ceba900bf75f647" Feb 13 15:11:52.168661 containerd[1955]: time="2025-02-13T15:11:52.168515548Z" level=info msg="CreateContainer within sandbox \"cfeaed686ce061f69277d488d571ec226a0a3851d46ff54239a1a488ebba2fe6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 15:11:52.215521 containerd[1955]: time="2025-02-13T15:11:52.215384944Z" level=info msg="CreateContainer within sandbox \"cfeaed686ce061f69277d488d571ec226a0a3851d46ff54239a1a488ebba2fe6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"06d510ab16a54b9c34bb46b424ff895d1b2947abeda5804824c2095687cf1492\"" Feb 13 15:11:52.216411 containerd[1955]: time="2025-02-13T15:11:52.216346672Z" level=info msg="StartContainer for \"06d510ab16a54b9c34bb46b424ff895d1b2947abeda5804824c2095687cf1492\"" Feb 13 15:11:52.273278 systemd[1]: Started cri-containerd-06d510ab16a54b9c34bb46b424ff895d1b2947abeda5804824c2095687cf1492.scope - libcontainer container 06d510ab16a54b9c34bb46b424ff895d1b2947abeda5804824c2095687cf1492. Feb 13 15:11:52.337343 containerd[1955]: time="2025-02-13T15:11:52.337255793Z" level=info msg="StartContainer for \"06d510ab16a54b9c34bb46b424ff895d1b2947abeda5804824c2095687cf1492\" returns successfully" Feb 13 15:11:52.973298 update_engine[1932]: I20250213 15:11:52.973169 1932 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:11:52.973901 update_engine[1932]: I20250213 15:11:52.973671 1932 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:11:52.974983 update_engine[1932]: I20250213 15:11:52.974171 1932 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:11:52.974983 update_engine[1932]: E20250213 15:11:52.974839 1932 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:11:52.975178 update_engine[1932]: I20250213 15:11:52.974985 1932 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 15:11:56.351109 kubelet[3210]: E0213 15:11:56.350389 3210 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-142?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 15:12:02.981052 update_engine[1932]: I20250213 15:12:02.980286 1932 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:12:02.981052 update_engine[1932]: I20250213 15:12:02.980643 1932 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:12:02.981749 update_engine[1932]: I20250213 15:12:02.981056 1932 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:12:02.981749 update_engine[1932]: E20250213 15:12:02.981446 1932 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:12:02.981749 update_engine[1932]: I20250213 15:12:02.981542 1932 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:12:02.981749 update_engine[1932]: I20250213 15:12:02.981562 1932 omaha_request_action.cc:617] Omaha request response: Feb 13 15:12:02.981749 update_engine[1932]: E20250213 15:12:02.981690 1932 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 15:12:02.981749 update_engine[1932]: I20250213 15:12:02.981726 1932 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 15:12:02.981749 update_engine[1932]: I20250213 15:12:02.981745 1932 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.981760 1932 update_attempter.cc:306] Processing Done. Feb 13 15:12:02.982197 update_engine[1932]: E20250213 15:12:02.981791 1932 update_attempter.cc:619] Update failed. Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.981806 1932 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.981822 1932 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.981838 1932 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.981990 1932 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.982035 1932 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.982054 1932 omaha_request_action.cc:272] Request: Feb 13 15:12:02.982197 update_engine[1932]: Feb 13 15:12:02.982197 update_engine[1932]: Feb 13 15:12:02.982197 update_engine[1932]: Feb 13 15:12:02.982197 update_engine[1932]: Feb 13 15:12:02.982197 update_engine[1932]: Feb 13 15:12:02.982197 update_engine[1932]: Feb 13 15:12:02.982197 update_engine[1932]: I20250213 15:12:02.982071 1932 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:12:02.982869 update_engine[1932]: I20250213 15:12:02.982345 1932 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:12:02.982869 update_engine[1932]: I20250213 15:12:02.982717 1932 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:12:02.983330 update_engine[1932]: E20250213 15:12:02.983124 1932 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:12:02.983330 update_engine[1932]: I20250213 15:12:02.983222 1932 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:12:02.983330 update_engine[1932]: I20250213 15:12:02.983244 1932 omaha_request_action.cc:617] Omaha request response: Feb 13 15:12:02.983330 update_engine[1932]: I20250213 15:12:02.983263 1932 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:12:02.983330 update_engine[1932]: I20250213 15:12:02.983279 1932 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:12:02.983330 update_engine[1932]: I20250213 15:12:02.983294 1932 update_attempter.cc:306] Processing Done. Feb 13 15:12:02.983330 update_engine[1932]: I20250213 15:12:02.983310 1932 update_attempter.cc:310] Error event sent. Feb 13 15:12:02.983816 update_engine[1932]: I20250213 15:12:02.983331 1932 update_check_scheduler.cc:74] Next update check in 42m12s Feb 13 15:12:02.983870 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 15:12:02.983870 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 15:12:06.352028 kubelet[3210]: E0213 15:12:06.351582 3210 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-142?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"