Feb 13 19:00:36.221171 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:00:36.221237 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:00:36.221265 kernel: KASLR disabled due to lack of seed
Feb 13 19:00:36.221281 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:00:36.221298 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 19:00:36.221313 kernel: secureboot: Secure boot disabled
Feb 13 19:00:36.221331 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:00:36.221346 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:00:36.221362 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:00:36.221377 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:00:36.221398 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:00:36.221414 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:00:36.221429 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:00:36.221445 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:00:36.221463 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:00:36.221484 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:00:36.221501 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:00:36.221517 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:00:36.221534 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:00:36.221550 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:00:36.221566 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:00:36.221582 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:00:36.221599 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:00:36.221616 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:00:36.221632 kernel: Zone ranges:
Feb 13 19:00:36.221649 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:00:36.221669 kernel: DMA32 empty
Feb 13 19:00:36.221687 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:00:36.221703 kernel: Movable zone start for each node
Feb 13 19:00:36.221719 kernel: Early memory node ranges
Feb 13 19:00:36.221736 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:00:36.221752 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:00:36.221769 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:00:36.221785 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:00:36.221802 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:00:36.221818 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:00:36.221834 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:00:36.221850 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:00:36.221871 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:00:36.221889 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:00:36.221912 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:00:36.221930 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:00:36.221947 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:00:36.221969 kernel: psci: Trusted OS migration not required
Feb 13 19:00:36.221987 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:00:36.222004 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:00:36.222021 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:00:36.222039 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:00:36.222056 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:00:36.222073 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:00:36.222091 kernel: CPU features: detected: Spectre-v2
Feb 13 19:00:36.222108 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:00:36.222126 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:00:36.222176 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:00:36.222198 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:00:36.222223 kernel: alternatives: applying boot alternatives
Feb 13 19:00:36.222244 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:00:36.222263 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:00:36.222280 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:00:36.222299 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:00:36.222318 kernel: Fallback order for Node 0: 0
Feb 13 19:00:36.222335 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:00:36.222353 kernel: Policy zone: Normal
Feb 13 19:00:36.222370 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:00:36.222387 kernel: software IO TLB: area num 2.
Feb 13 19:00:36.222409 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:00:36.222428 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Feb 13 19:00:36.222445 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:00:36.222463 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:00:36.222481 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:00:36.222500 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:00:36.222519 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:00:36.222537 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:00:36.222555 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
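An aside on the kernel command line logged above: it mixes bare flags such as earlycon with key=value parameters consumed by the kernel, dracut, and Flatcar (mount.usr, flatcar.oem.id, verity.usrhash, and the BOOT_IMAGE passthrough noted just after it). A minimal Python sketch of that split, assuming no quoted values (the logged line has none); the parse_cmdline name is illustrative, not part of any boot component:

def parse_cmdline(cmdline: str) -> tuple[set[str], dict[str, str]]:
    """Split a kernel command line into bare flags and key=value parameters."""
    flags, params = set(), {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value  # e.g. params["flatcar.oem.id"] == "ec2"
        else:
            flags.add(token)     # e.g. "earlycon"
    return flags, params

# Example using fragments of the command line logged above:
flags, params = parse_cmdline(
    "earlycon acpi=force flatcar.oem.id=ec2 net.ifnames=0 mount.usrflags=ro"
)
assert "earlycon" in flags and params["net.ifnames"] == "0"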
Feb 13 19:00:36.222572 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:00:36.222590 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:00:36.222613 kernel: GICv3: 96 SPIs implemented
Feb 13 19:00:36.222631 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:00:36.222649 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:00:36.222667 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:00:36.222685 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:00:36.222703 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:00:36.222722 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:00:36.222741 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:00:36.222761 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:00:36.222779 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:00:36.222797 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:00:36.222816 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:00:36.222839 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:00:36.222858 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:00:36.222876 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:00:36.222895 kernel: Console: colour dummy device 80x25
Feb 13 19:00:36.222914 kernel: printk: console [tty1] enabled
Feb 13 19:00:36.222933 kernel: ACPI: Core revision 20230628
Feb 13 19:00:36.222953 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:00:36.222972 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:00:36.222990 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:00:36.223008 kernel: landlock: Up and running.
Feb 13 19:00:36.223039 kernel: SELinux: Initializing.
Feb 13 19:00:36.223058 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:00:36.223076 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:00:36.223095 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:00:36.223113 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:00:36.223130 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:00:36.223308 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:00:36.223332 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:00:36.223359 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:00:36.223379 kernel: Remapping and enabling EFI services.
Feb 13 19:00:36.223398 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:00:36.223417 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:00:36.223436 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:00:36.223455 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:00:36.223474 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:00:36.223491 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:00:36.223509 kernel: SMP: Total of 2 processors activated.
Feb 13 19:00:36.223526 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:00:36.223549 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:00:36.223567 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:00:36.223596 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:00:36.223619 kernel: alternatives: applying system-wide alternatives
Feb 13 19:00:36.223637 kernel: devtmpfs: initialized
Feb 13 19:00:36.223655 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:00:36.223674 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:00:36.223692 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:00:36.223710 kernel: SMBIOS 3.0.0 present.
Feb 13 19:00:36.223733 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:00:36.223752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:00:36.223770 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:00:36.223788 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:00:36.223807 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:00:36.223826 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:00:36.223844 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Feb 13 19:00:36.223867 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:00:36.223886 kernel: cpuidle: using governor menu
Feb 13 19:00:36.223904 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:00:36.223922 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:00:36.223940 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:00:36.223959 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:00:36.223977 kernel: Modules: 17760 pages in range for non-PLT usage
Feb 13 19:00:36.223995 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:00:36.224014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:00:36.224037 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:00:36.224056 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:00:36.224074 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:00:36.224093 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:00:36.224111 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:00:36.224129 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:00:36.224185 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:00:36.224207 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:00:36.224226 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:00:36.224250 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:00:36.224269 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:00:36.224287 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:00:36.224306 kernel: ACPI: Interpreter enabled
Feb 13 19:00:36.224324 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:00:36.224342 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:00:36.224360 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:00:36.224730 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:00:36.224982 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:00:36.227443 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:00:36.227675 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:00:36.227875 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:00:36.227900 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:00:36.227919 kernel: acpiphp: Slot [1] registered
Feb 13 19:00:36.227938 kernel: acpiphp: Slot [2] registered
Feb 13 19:00:36.227956 kernel: acpiphp: Slot [3] registered
Feb 13 19:00:36.227984 kernel: acpiphp: Slot [4] registered
Feb 13 19:00:36.228002 kernel: acpiphp: Slot [5] registered
Feb 13 19:00:36.228021 kernel: acpiphp: Slot [6] registered
Feb 13 19:00:36.228040 kernel: acpiphp: Slot [7] registered
Feb 13 19:00:36.228058 kernel: acpiphp: Slot [8] registered
Feb 13 19:00:36.228076 kernel: acpiphp: Slot [9] registered
Feb 13 19:00:36.228095 kernel: acpiphp: Slot [10] registered
Feb 13 19:00:36.228113 kernel: acpiphp: Slot [11] registered
Feb 13 19:00:36.228132 kernel: acpiphp: Slot [12] registered
Feb 13 19:00:36.228173 kernel: acpiphp: Slot [13] registered
Feb 13 19:00:36.228201 kernel: acpiphp: Slot [14] registered
Feb 13 19:00:36.228220 kernel: acpiphp: Slot [15] registered
Feb 13 19:00:36.228238 kernel: acpiphp: Slot [16] registered
Feb 13 19:00:36.228256 kernel: acpiphp: Slot [17] registered
Feb 13 19:00:36.228274 kernel: acpiphp: Slot [18] registered
Feb 13 19:00:36.228293 kernel: acpiphp: Slot [19] registered
Feb 13 19:00:36.228311 kernel: acpiphp: Slot [20] registered
Feb 13 19:00:36.228329 kernel: acpiphp: Slot [21] registered
Feb 13 19:00:36.228347 kernel: acpiphp: Slot [22] registered
Feb 13 19:00:36.228370 kernel: acpiphp: Slot [23] registered
Feb 13 19:00:36.228389 kernel: acpiphp: Slot [24] registered
Feb 13 19:00:36.228428 kernel: acpiphp: Slot [25] registered
Feb 13 19:00:36.228447 kernel: acpiphp: Slot [26] registered
Feb 13 19:00:36.228466 kernel: acpiphp: Slot [27] registered
Feb 13 19:00:36.228484 kernel: acpiphp: Slot [28] registered
Feb 13 19:00:36.228503 kernel: acpiphp: Slot [29] registered
Feb 13 19:00:36.228521 kernel: acpiphp: Slot [30] registered
Feb 13 19:00:36.228628 kernel: acpiphp: Slot [31] registered
Feb 13 19:00:36.228653 kernel: PCI host bridge to bus 0000:00
Feb 13 19:00:36.228892 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:00:36.229087 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:00:36.231931 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:00:36.232172 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:00:36.232452 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:00:36.232703 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:00:36.232929 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:00:36.233195 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:00:36.233422 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:00:36.233637 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:00:36.233863 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:00:36.234073 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:00:36.240872 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:00:36.241117 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:00:36.241364 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:00:36.241570 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:00:36.241777 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:00:36.241985 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:00:36.242218 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:00:36.242436 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:00:36.242636 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:00:36.242835 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:00:36.243027 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:00:36.243052 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:00:36.243071 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:00:36.243090 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:00:36.243109 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:00:36.243127 kernel: iommu: Default domain type: Translated
Feb 13 19:00:36.243182 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:00:36.243202 kernel: efivars: Registered efivars operations
Feb 13 19:00:36.243220 kernel: vgaarb: loaded
Feb 13 19:00:36.243239 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:00:36.243258 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:00:36.243276 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:00:36.243294 kernel: pnp: PnP ACPI init
Feb 13 19:00:36.243540 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:00:36.243573 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:00:36.243593 kernel: NET: Registered PF_INET protocol family
Feb 13 19:00:36.243611 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:00:36.243630 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:00:36.243648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:00:36.243667 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:00:36.243686 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:00:36.243704 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:00:36.243722 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:00:36.243746 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:00:36.243764 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:00:36.243783 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:00:36.243801 kernel: kvm [1]: HYP mode not available
Feb 13 19:00:36.243819 kernel: Initialise system trusted keyrings
Feb 13 19:00:36.243838 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:00:36.243857 kernel: Key type asymmetric registered
Feb 13 19:00:36.243875 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:00:36.243893 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:00:36.243917 kernel: io scheduler mq-deadline registered
Feb 13 19:00:36.243935 kernel: io scheduler kyber registered
Feb 13 19:00:36.243954 kernel: io scheduler bfq registered
Feb 13 19:00:36.248439 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:00:36.248488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:00:36.248508 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:00:36.248527 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:00:36.248546 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:00:36.248573 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:00:36.248594 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:00:36.248833 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:00:36.248860 kernel: printk: console [ttyS0] disabled
Feb 13 19:00:36.248878 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:00:36.248897 kernel: printk: console [ttyS0] enabled
Feb 13 19:00:36.251307 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:00:36.251355 kernel: thunder_xcv, ver 1.0
Feb 13 19:00:36.251374 kernel: thunder_bgx, ver 1.0
Feb 13 19:00:36.251393 kernel: nicpf, ver 1.0
Feb 13 19:00:36.251421 kernel: nicvf, ver 1.0
Feb 13 19:00:36.251705 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:00:36.251908 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:00:35 UTC (1739473235)
Feb 13 19:00:36.251935 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:00:36.251955 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:00:36.252001 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:00:36.252023 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:00:36.252050 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:00:36.252070 kernel: Segment Routing with IPv6
Feb 13 19:00:36.252088 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:00:36.252107 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:00:36.252126 kernel: Key type dns_resolver registered
Feb 13 19:00:36.252165 kernel: registered taskstats version 1
Feb 13 19:00:36.252186 kernel: Loading compiled-in X.509 certificates
Feb 13 19:00:36.252205 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:00:36.252223 kernel: Key type .fscrypt registered
Feb 13 19:00:36.252241 kernel: Key type fscrypt-provisioning registered
Feb 13 19:00:36.252266 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:00:36.252284 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:00:36.252302 kernel: ima: No architecture policies found
Feb 13 19:00:36.252321 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:00:36.252339 kernel: clk: Disabling unused clocks
Feb 13 19:00:36.252357 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:00:36.252375 kernel: Run /init as init process
Feb 13 19:00:36.252411 kernel: with arguments:
Feb 13 19:00:36.252434 kernel: /init
Feb 13 19:00:36.252458 kernel: with environment:
Feb 13 19:00:36.252476 kernel: HOME=/
Feb 13 19:00:36.252494 kernel: TERM=linux
Feb 13 19:00:36.252512 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:00:36.252532 systemd[1]: Successfully made /usr/ read-only.
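The last kernel entries above record the handoff to user space: /init is executed as PID 1 with the logged arguments and environment (HOME=/, TERM=linux, plus the unrecognized BOOT_IMAGE= parameter passed through). A small illustrative Python sketch, assuming a Linux /proc and the privileges to inspect PID 1, of recovering the same argv and envp; pid1_args_env is a hypothetical helper, not part of any boot component:

import pathlib

def pid1_args_env() -> tuple[list[str], list[str]]:
    """Read PID 1's argv and envp; both proc files are NUL-separated."""
    argv = pathlib.Path("/proc/1/cmdline").read_bytes().split(b"\0")
    envp = pathlib.Path("/proc/1/environ").read_bytes().split(b"\0")
    return ([a.decode() for a in argv if a],
            [e.decode() for e in envp if e])

# In the initrd above this would yield
# (["/init"], ["HOME=/", "TERM=linux", "BOOT_IMAGE=/flatcar/vmlinuz-a"]).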
Feb 13 19:00:36.252557 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:00:36.252579 systemd[1]: Detected virtualization amazon.
Feb 13 19:00:36.252603 systemd[1]: Detected architecture arm64.
Feb 13 19:00:36.252622 systemd[1]: Running in initrd.
Feb 13 19:00:36.252642 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:00:36.252662 systemd[1]: Hostname set to <localhost>.
Feb 13 19:00:36.252682 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:00:36.252701 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:00:36.252721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:00:36.252741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:00:36.252762 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:00:36.252787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:00:36.252807 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:00:36.252829 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:00:36.252851 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:00:36.252871 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:00:36.252891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:00:36.252916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:00:36.252936 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:00:36.252955 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:00:36.252975 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:00:36.252995 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:00:36.253015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:00:36.253034 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:00:36.253054 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:00:36.253074 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:00:36.253098 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:00:36.253118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:00:36.254096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:00:36.254300 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:00:36.254323 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:00:36.254344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:00:36.254365 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:00:36.254385 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:00:36.254417 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:00:36.254438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:00:36.254458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:00:36.254478 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:00:36.254498 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:00:36.254519 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:00:36.254545 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:00:36.254618 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:00:36.254662 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:00:36.254688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:00:36.254708 kernel: Bridge firewalling registered
Feb 13 19:00:36.254729 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:00:36.254749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:00:36.254769 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:00:36.254790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:00:36.254809 systemd-journald[251]: Journal started
Feb 13 19:00:36.254851 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2f301f15cf65791f3f3126a313c55d) is 8M, max 75.3M, 67.3M free.
Feb 13 19:00:36.170668 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:00:36.215740 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:00:36.269625 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:00:36.273302 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:00:36.276183 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:00:36.291366 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:00:36.301328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:00:36.310251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:00:36.314690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:00:36.336260 dracut-cmdline[282]: dracut-dracut-053
Feb 13 19:00:36.342966 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:00:36.357513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:00:36.371453 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:00:36.452904 systemd-resolved[309]: Positive Trust Anchors:
Feb 13 19:00:36.452939 systemd-resolved[309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:00:36.452996 systemd-resolved[309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:00:36.525181 kernel: SCSI subsystem initialized
Feb 13 19:00:36.534165 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:00:36.545180 kernel: iscsi: registered transport (tcp)
Feb 13 19:00:36.567182 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:00:36.567262 kernel: QLogic iSCSI HBA Driver
Feb 13 19:00:36.685183 kernel: random: crng init done
Feb 13 19:00:36.685439 systemd-resolved[309]: Defaulting to hostname 'linux'.
Feb 13 19:00:36.688982 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:00:36.698101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:00:36.711126 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:00:36.723389 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:00:36.766181 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:00:36.767164 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:00:36.769176 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:00:36.833201 kernel: raid6: neonx8 gen() 6585 MB/s
Feb 13 19:00:36.850173 kernel: raid6: neonx4 gen() 6553 MB/s
Feb 13 19:00:36.867172 kernel: raid6: neonx2 gen() 5438 MB/s
Feb 13 19:00:36.884173 kernel: raid6: neonx1 gen() 3950 MB/s
Feb 13 19:00:36.901172 kernel: raid6: int64x8 gen() 3622 MB/s
Feb 13 19:00:36.918172 kernel: raid6: int64x4 gen() 3717 MB/s
Feb 13 19:00:36.935172 kernel: raid6: int64x2 gen() 3607 MB/s
Feb 13 19:00:36.952965 kernel: raid6: int64x1 gen() 2761 MB/s
Feb 13 19:00:36.952997 kernel: raid6: using algorithm neonx8 gen() 6585 MB/s
Feb 13 19:00:36.970919 kernel: raid6: .... xor() 4710 MB/s, rmw enabled
Feb 13 19:00:36.970956 kernel: raid6: using neon recovery algorithm
Feb 13 19:00:36.978176 kernel: xor: measuring software checksum speed
Feb 13 19:00:36.978233 kernel: 8regs : 11938 MB/sec
Feb 13 19:00:36.980173 kernel: 32regs : 11849 MB/sec
Feb 13 19:00:36.982175 kernel: arm64_neon : 8970 MB/sec
Feb 13 19:00:36.982207 kernel: xor: using function: 8regs (11938 MB/sec)
Feb 13 19:00:37.065579 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:00:37.082931 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:00:37.092485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:00:37.136983 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Feb 13 19:00:37.148134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:00:37.162431 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:00:37.191349 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Feb 13 19:00:37.247501 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:00:37.264556 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:00:37.375602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:00:37.392056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:00:37.437674 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:00:37.445577 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:00:37.450610 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:00:37.454868 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:00:37.463446 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:00:37.506198 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:00:37.574642 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:00:37.574707 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:00:37.595743 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:00:37.596012 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:00:37.596720 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2e:0d:33:11:2d
Feb 13 19:00:37.598992 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:00:37.612207 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:00:37.613061 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:00:37.612804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:00:37.613031 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:00:37.617593 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:00:37.621087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:00:37.623840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:00:37.633396 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:00:37.642199 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:00:37.646658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:00:37.658023 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:00:37.658107 kernel: GPT:9289727 != 16777215
Feb 13 19:00:37.658133 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:00:37.659815 kernel: GPT:9289727 != 16777215
Feb 13 19:00:37.659869 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:00:37.664187 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:00:37.671279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:00:37.681103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:00:37.734899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
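The GPT warnings above ("GPT:9289727 != 16777215") mean the primary header's alternate-header pointer references LBA 9289727 while the device's last sector is LBA 16777215: the image was built for a smaller disk than the attached EBS volume, which disk-uuid.service repairs shortly below. A Python sketch of the same consistency check, assuming 512-byte sectors and read access to the device (the path and function name are illustrative); the GPT header sits at LBA 1 with its AlternateLBA field at byte offset 32:

import struct

def check_alt_gpt_header(dev_path: str = "/dev/nvme0n1", sector: int = 512) -> None:
    with open(dev_path, "rb") as dev:
        dev.seek(1 * sector)                 # primary GPT header lives at LBA 1
        header = dev.read(92)
        dev.seek(0, 2)                       # seek to the end to size the device
        last_lba = dev.tell() // sector - 1  # LBA of the final sector
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    (alt_lba,) = struct.unpack_from("<Q", header, 32)  # AlternateLBA field
    if alt_lba != last_lba:
        # Same comparison the kernel reports as "GPT:9289727 != 16777215".
        print(f"GPT:{alt_lba} != {last_lba}")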
Feb 13 19:00:37.749728 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (532)
Feb 13 19:00:37.818203 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Feb 13 19:00:37.871481 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:00:37.897782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:00:37.954055 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:00:37.976883 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:00:37.979469 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:00:37.995427 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:00:38.008779 disk-uuid[664]: Primary Header is updated.
Feb 13 19:00:38.008779 disk-uuid[664]: Secondary Entries is updated.
Feb 13 19:00:38.008779 disk-uuid[664]: Secondary Header is updated.
Feb 13 19:00:38.018213 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:00:39.034517 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:00:39.037214 disk-uuid[665]: The operation has completed successfully.
Feb 13 19:00:39.219613 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:00:39.219837 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:00:39.320467 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:00:39.344306 sh[925]: Success
Feb 13 19:00:39.369416 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:00:39.470613 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:00:39.488357 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:00:39.492737 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:00:39.523571 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:00:39.523644 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:00:39.523671 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:00:39.526534 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:00:39.526569 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:00:39.647183 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:00:39.660512 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:00:39.664310 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:00:39.680404 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:00:39.688636 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:00:39.731964 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:00:39.732044 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:00:39.732077 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:00:39.741518 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:00:39.759103 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:00:39.762207 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:00:39.771066 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:00:39.781544 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:00:39.856302 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:00:39.870443 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:00:39.923163 systemd-networkd[1118]: lo: Link UP
Feb 13 19:00:39.923185 systemd-networkd[1118]: lo: Gained carrier
Feb 13 19:00:39.926518 systemd-networkd[1118]: Enumeration completed
Feb 13 19:00:39.927234 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:00:39.927241 systemd-networkd[1118]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:00:39.930992 systemd-networkd[1118]: eth0: Link UP
Feb 13 19:00:39.931000 systemd-networkd[1118]: eth0: Gained carrier
Feb 13 19:00:39.931017 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:00:39.931375 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:00:39.935207 systemd[1]: Reached target network.target - Network.
Feb 13 19:00:39.958245 systemd-networkd[1118]: eth0: DHCPv4 address 172.31.27.65/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:00:40.216800 ignition[1051]: Ignition 2.20.0
Feb 13 19:00:40.217356 ignition[1051]: Stage: fetch-offline
Feb 13 19:00:40.217797 ignition[1051]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:00:40.217821 ignition[1051]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:00:40.218317 ignition[1051]: Ignition finished successfully
Feb 13 19:00:40.227105 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:00:40.243049 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:00:40.264268 ignition[1128]: Ignition 2.20.0
Feb 13 19:00:40.264784 ignition[1128]: Stage: fetch
Feb 13 19:00:40.265409 ignition[1128]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:00:40.265435 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:00:40.265636 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:00:40.290384 ignition[1128]: PUT result: OK
Feb 13 19:00:40.294415 ignition[1128]: parsed url from cmdline: ""
Feb 13 19:00:40.294440 ignition[1128]: no config URL provided
Feb 13 19:00:40.294455 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:00:40.294481 ignition[1128]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:00:40.294516 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:00:40.298129 ignition[1128]: PUT result: OK
Feb 13 19:00:40.298528 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:00:40.305136 ignition[1128]: GET result: OK
Feb 13 19:00:40.306436 ignition[1128]: parsing config with SHA512: 8cd09575911da3e4cb33731eb75dcbf1aa59e014d45c949d4e3fc72ad16e7129733ffc457765c09b11d93e122bf2059eb8660c4a874fbe738dacf223ae445060
Feb 13 19:00:40.316223 unknown[1128]: fetched base config from "system"
Feb 13 19:00:40.316901 ignition[1128]: fetch: fetch complete
Feb 13 19:00:40.316246 unknown[1128]: fetched base config from "system"
Feb 13 19:00:40.316913 ignition[1128]: fetch: fetch passed
Feb 13 19:00:40.316260 unknown[1128]: fetched user config from "aws"
Feb 13 19:00:40.316992 ignition[1128]: Ignition finished successfully
Feb 13 19:00:40.320730 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:00:40.337394 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:00:40.382449 ignition[1135]: Ignition 2.20.0
Feb 13 19:00:40.382936 ignition[1135]: Stage: kargs
Feb 13 19:00:40.383587 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:00:40.383612 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:00:40.383783 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:00:40.386671 ignition[1135]: PUT result: OK
Feb 13 19:00:40.394261 ignition[1135]: kargs: kargs passed
Feb 13 19:00:40.394353 ignition[1135]: Ignition finished successfully
Feb 13 19:00:40.399785 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:00:40.408493 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:00:40.441440 ignition[1141]: Ignition 2.20.0
Feb 13 19:00:40.441470 ignition[1141]: Stage: disks
Feb 13 19:00:40.442555 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:00:40.442588 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:00:40.442741 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:00:40.444497 ignition[1141]: PUT result: OK
Feb 13 19:00:40.452472 ignition[1141]: disks: disks passed
Feb 13 19:00:40.456273 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:00:40.452561 ignition[1141]: Ignition finished successfully
Feb 13 19:00:40.461498 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:00:40.464041 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:00:40.467364 systemd[1]: Reached target local-fs.target - Local File Systems.
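Each Ignition stage above begins with a PUT to the EC2 instance metadata service to obtain an IMDSv2 session token, and the fetch stage then GETs user-data with it. A minimal standard-library Python sketch of that two-step exchange; the endpoint paths are taken from the log, the header names are the documented IMDSv2 ones, and fetch_user_data is an illustrative name rather than anything Ignition exposes:

import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data(ttl_seconds: int = 300) -> bytes:
    # Step 1: PUT .../latest/api/token (logged above as "PUT ... attempt #1").
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    token = urllib.request.urlopen(req, timeout=5).read().decode()
    # Step 2: GET the versioned user-data path seen in the log.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req, timeout=5).read()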
Feb 13 19:00:40.471053 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:00:40.472936 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:00:40.495486 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:00:40.536624 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:00:40.545212 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:00:40.556286 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:00:40.653190 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:00:40.654565 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:00:40.656858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:00:40.675378 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:00:40.682512 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:00:40.686602 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:00:40.686687 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:00:40.686737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:00:40.709176 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168)
Feb 13 19:00:40.714302 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:00:40.714365 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:00:40.714391 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:00:40.715547 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:00:40.724203 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:00:40.725419 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:00:40.732087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:00:41.160237 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:00:41.168875 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:00:41.177416 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:00:41.185818 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:00:41.411450 systemd-networkd[1118]: eth0: Gained IPv6LL
Feb 13 19:00:41.520734 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:00:41.531353 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:00:41.548476 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:00:41.566108 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:00:41.569279 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:00:41.602876 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:00:41.613959 ignition[1280]: INFO : Ignition 2.20.0
Feb 13 19:00:41.613959 ignition[1280]: INFO : Stage: mount
Feb 13 19:00:41.618047 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:00:41.618047 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:00:41.618047 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:00:41.618047 ignition[1280]: INFO : PUT result: OK
Feb 13 19:00:41.628624 ignition[1280]: INFO : mount: mount passed
Feb 13 19:00:41.630315 ignition[1280]: INFO : Ignition finished successfully
Feb 13 19:00:41.632666 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:00:41.648338 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:00:41.671504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:00:41.697851 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1292)
Feb 13 19:00:41.697924 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:00:41.699780 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:00:41.699816 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:00:41.707182 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:00:41.710414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:00:41.749538 ignition[1309]: INFO : Ignition 2.20.0
Feb 13 19:00:41.749538 ignition[1309]: INFO : Stage: files
Feb 13 19:00:41.752766 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:00:41.752766 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:00:41.752766 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:00:41.759470 ignition[1309]: INFO : PUT result: OK
Feb 13 19:00:41.763489 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:00:41.777301 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:00:41.777301 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:00:41.803817 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:00:41.806493 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:00:41.808927 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:00:41.807172 unknown[1309]: wrote ssh authorized keys file for user: core
Feb 13 19:00:41.820697 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:00:41.824305 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:00:42.029390 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:00:42.817090 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:00:42.821296 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:00:42.821296 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:00:43.342128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:00:43.484379 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:00:43.487694 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:00:43.896546 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:00:44.254177 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:00:44.254177 ignition[1309]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:00:44.261396 ignition[1309]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:00:44.261396 ignition[1309]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
19:00:44.261396 ignition[1309]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:00:44.261396 ignition[1309]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:00:44.261396 ignition[1309]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:00:44.261396 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:00:44.261396 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:00:44.261396 ignition[1309]: INFO : files: files passed Feb 13 19:00:44.261396 ignition[1309]: INFO : Ignition finished successfully Feb 13 19:00:44.277114 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:00:44.295495 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:00:44.300809 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:00:44.317829 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:00:44.318015 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:00:44.334258 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:00:44.334258 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:00:44.340289 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:00:44.346464 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:00:44.349555 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:00:44.365467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:00:44.420010 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:00:44.422250 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:00:44.424882 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:00:44.428997 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:00:44.430947 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:00:44.446460 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:00:44.473105 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:00:44.488442 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:00:44.509106 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:00:44.512522 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:00:44.515895 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:00:44.517598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:00:44.517832 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:00:44.523624 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:00:44.525747 systemd[1]: Stopped target basic.target - Basic System. 
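The file downloads, SSH-key provisioning, and unit enablement recorded above are driven by the instance's Ignition config, delivered as EC2 user data and not reproduced in this log. A hypothetical spec-3.x fragment that would trigger operations like op(3) (fetch helm) and op(c)/op(e) (write and enable prepare-helm.service) could look like the sketch below; the unit body and layout are illustrative, not the real config for this host.

```sh
# Hypothetical sketch of an Ignition config; the actual user data is not shown in this log.
cat > config.ign <<'EOF'
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz" }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "prepare-helm.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xf /opt/helm-v3.13.2-linux-arm64.tar.gz --strip-components=1\n\n[Install]\nWantedBy=multi-user.target\n"
      }
    ]
  }
}
EOF
```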
Feb 13 19:00:44.527889 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:00:44.531169 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:00:44.533418 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:00:44.535771 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:00:44.552038 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:00:44.554798 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:00:44.557324 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:00:44.561238 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:00:44.562919 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:00:44.563175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:00:44.571492 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:00:44.578336 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:00:44.580920 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:00:44.582722 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:00:44.585987 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:00:44.586237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:00:44.595778 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:00:44.596006 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:00:44.598404 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:00:44.598599 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:00:44.616221 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:00:44.618582 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:00:44.620810 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:00:44.640995 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:00:44.643589 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:00:44.646794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:00:44.653543 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:00:44.668563 ignition[1361]: INFO : Ignition 2.20.0 Feb 13 19:00:44.668563 ignition[1361]: INFO : Stage: umount Feb 13 19:00:44.668563 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:00:44.668563 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:00:44.668563 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:00:44.653937 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:00:44.690338 ignition[1361]: INFO : PUT result: OK Feb 13 19:00:44.690338 ignition[1361]: INFO : umount: umount passed Feb 13 19:00:44.690338 ignition[1361]: INFO : Ignition finished successfully Feb 13 19:00:44.687625 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:00:44.687838 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:00:44.709690 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 13 19:00:44.710442 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:00:44.721801 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:00:44.722083 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:00:44.728955 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:00:44.729452 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:00:44.739469 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:00:44.739585 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:00:44.742436 systemd[1]: Stopped target network.target - Network. Feb 13 19:00:44.744831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:00:44.744937 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:00:44.747346 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:00:44.752256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:00:44.758623 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:00:44.763772 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:00:44.769571 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:00:44.773046 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:00:44.773130 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:00:44.778945 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:00:44.779016 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:00:44.780900 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:00:44.780985 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:00:44.782859 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:00:44.782936 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:00:44.785082 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:00:44.787126 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:00:44.803483 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:00:44.805052 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:00:44.805247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:00:44.809099 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:00:44.809382 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:00:44.812487 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:00:44.813251 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:00:44.836564 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:00:44.837008 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:00:44.837339 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:00:44.847118 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:00:44.849515 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:00:44.849632 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:00:44.881323 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Feb 13 19:00:44.883381 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:00:44.883502 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:00:44.885910 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:00:44.886009 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:00:44.889914 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:00:44.889995 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:00:44.893222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:00:44.893310 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:00:44.899092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:00:44.902895 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:00:44.903028 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:00:44.939394 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:00:44.940805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:00:44.946911 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:00:44.947241 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:00:44.954981 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:00:44.955093 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:00:44.959046 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:00:44.959116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:00:44.962190 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:00:44.962281 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:00:44.975018 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:00:44.975107 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:00:44.978801 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:00:44.978891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:00:44.992442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:00:44.995755 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:00:44.995877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:00:45.008594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:00:45.008973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:00:45.018089 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:00:45.018241 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:00:45.018888 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:00:45.019491 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:00:45.028239 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Feb 13 19:00:45.052511 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:00:45.069097 systemd[1]: Switching root. Feb 13 19:00:45.120053 systemd-journald[251]: Journal stopped Feb 13 19:00:47.536641 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 19:00:47.536770 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:00:47.536813 kernel: SELinux: policy capability open_perms=1 Feb 13 19:00:47.536843 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:00:47.536887 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:00:47.536916 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:00:47.536946 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:00:47.536976 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:00:47.537005 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:00:47.537034 kernel: audit: type=1403 audit(1739473245.620:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:00:47.537072 systemd[1]: Successfully loaded SELinux policy in 80.097ms. Feb 13 19:00:47.537115 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.133ms. Feb 13 19:00:47.537165 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:00:47.537205 systemd[1]: Detected virtualization amazon. Feb 13 19:00:47.537237 systemd[1]: Detected architecture arm64. Feb 13 19:00:47.537267 systemd[1]: Detected first boot. Feb 13 19:00:47.537298 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:00:47.537326 zram_generator::config[1406]: No configuration found. Feb 13 19:00:47.537361 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:00:47.537390 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:00:47.537421 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:00:47.537455 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:00:47.537484 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:00:47.537516 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:00:47.537548 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:00:47.537578 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:00:47.537610 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:00:47.537641 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:00:47.537672 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:00:47.537702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:00:47.537736 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:00:47.537767 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:00:47.537807 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:00:47.537836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:00:47.537875 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:00:47.537906 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:00:47.537942 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:00:47.537973 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:00:47.538008 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:00:47.538037 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:00:47.538066 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:00:47.538095 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:00:47.538126 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:00:47.540423 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:00:47.540466 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:00:47.540498 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:00:47.540538 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:00:47.540569 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:00:47.540598 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:00:47.540630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:00:47.540661 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:00:47.540692 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:00:47.540723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:00:47.540751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:00:47.540780 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:00:47.540813 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:00:47.540844 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:00:47.540879 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:00:47.540908 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:00:47.540939 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:00:47.540970 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:00:47.541000 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:00:47.541029 systemd[1]: Reached target machines.target - Containers. Feb 13 19:00:47.541060 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:00:47.541095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:00:47.541123 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:00:47.541174 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:00:47.541207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Feb 13 19:00:47.541238 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:00:47.541267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:00:47.541298 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:00:47.541328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:00:47.541362 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:00:47.541392 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:00:47.541420 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:00:47.541448 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:00:47.541478 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:00:47.541508 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:00:47.541536 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:00:47.541563 kernel: fuse: init (API version 7.39) Feb 13 19:00:47.541591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:00:47.541626 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:00:47.541657 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:00:47.541687 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:00:47.541716 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:00:47.541750 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:00:47.541779 systemd[1]: Stopped verity-setup.service. Feb 13 19:00:47.541808 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:00:47.541840 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:00:47.541868 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:00:47.541899 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:00:47.541928 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:00:47.541956 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:00:47.541985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:00:47.542021 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:00:47.542050 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:00:47.542077 kernel: loop: module loaded Feb 13 19:00:47.542104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:00:47.542132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:00:47.547081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:00:47.547130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:00:47.547181 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:00:47.547216 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Feb 13 19:00:47.547294 systemd-journald[1489]: Collecting audit messages is disabled. Feb 13 19:00:47.547361 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:00:47.547396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:00:47.547425 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:00:47.547454 systemd-journald[1489]: Journal started Feb 13 19:00:47.547504 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec2f301f15cf65791f3f3126a313c55d) is 8M, max 75.3M, 67.3M free. Feb 13 19:00:46.976742 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:00:46.989419 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:00:46.990277 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:00:47.555237 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:00:47.555312 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:00:47.566192 kernel: ACPI: bus type drm_connector registered Feb 13 19:00:47.569471 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:00:47.571298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:00:47.592003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:00:47.612430 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:00:47.621025 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:00:47.633403 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:00:47.648341 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:00:47.652408 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:00:47.652491 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:00:47.658172 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:00:47.676536 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:00:47.691442 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:00:47.693670 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:00:47.704576 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:00:47.710442 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:00:47.712702 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:00:47.724345 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:00:47.726421 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:00:47.737828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:00:47.752657 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:00:47.762602 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
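With systemd-journald running, the runtime journal sized above (8M used, 75.3M max) is queryable with standard journalctl invocations, e.g.:

```sh
journalctl --disk-usage                  # space used by runtime and persistent journals
journalctl -b -u ignition-files.service  # replay one unit's messages from this boot
```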
Feb 13 19:00:47.766079 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:00:47.769303 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:00:47.785708 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:00:47.789007 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:00:47.800357 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:00:47.815330 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec2f301f15cf65791f3f3126a313c55d is 103.200ms for 922 entries. Feb 13 19:00:47.815330 systemd-journald[1489]: System Journal (/var/log/journal/ec2f301f15cf65791f3f3126a313c55d) is 8M, max 195.6M, 187.6M free. Feb 13 19:00:47.931647 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 19:00:47.931715 systemd-journald[1489]: Received client request to flush runtime journal. Feb 13 19:00:47.818234 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:00:47.831573 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:00:47.845467 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:00:47.860451 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:00:47.864771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:00:47.921220 udevadm[1551]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:00:47.936351 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:00:47.965482 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:00:47.966266 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:00:47.989376 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 19:00:47.993790 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:00:48.007312 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:00:48.019441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:00:48.062250 kernel: loop2: detected capacity change from 0 to 53784 Feb 13 19:00:48.075893 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Feb 13 19:00:48.076741 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Feb 13 19:00:48.091346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:00:48.148489 kernel: loop3: detected capacity change from 0 to 113512 Feb 13 19:00:48.275237 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 19:00:48.297307 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:00:48.335368 kernel: loop6: detected capacity change from 0 to 53784 Feb 13 19:00:48.361341 kernel: loop7: detected capacity change from 0 to 113512 Feb 13 19:00:48.376652 (sd-merge)[1567]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:00:48.378512 (sd-merge)[1567]: Merged extensions into '/usr'. Feb 13 19:00:48.393064 systemd[1]: Reload requested from client PID 1540 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:00:48.393225 systemd[1]: Reloading... 
Feb 13 19:00:48.560192 zram_generator::config[1594]: No configuration found. Feb 13 19:00:48.927977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:00:49.087537 systemd[1]: Reloading finished in 693 ms. Feb 13 19:00:49.109755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:00:49.112955 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:00:49.129522 systemd[1]: Starting ensure-sysext.service... Feb 13 19:00:49.139496 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:00:49.145510 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:00:49.173271 systemd[1]: Reload requested from client PID 1650 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:00:49.173302 systemd[1]: Reloading... Feb 13 19:00:49.207745 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:00:49.210590 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:00:49.213444 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:00:49.213994 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Feb 13 19:00:49.214128 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Feb 13 19:00:49.234179 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:00:49.234204 systemd-tmpfiles[1651]: Skipping /boot Feb 13 19:00:49.277695 systemd-udevd[1652]: Using default interface naming scheme 'v255'. Feb 13 19:00:49.301492 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:00:49.301521 systemd-tmpfiles[1651]: Skipping /boot Feb 13 19:00:49.420173 zram_generator::config[1708]: No configuration found. Feb 13 19:00:49.446379 ldconfig[1535]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:00:49.616915 (udev-worker)[1693]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:00:49.824821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:00:49.876326 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1688) Feb 13 19:00:50.033736 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:00:50.036037 systemd[1]: Reloading finished in 862 ms. Feb 13 19:00:50.049482 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:00:50.052647 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:00:50.093003 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:00:50.141210 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:00:50.151213 systemd[1]: Finished ensure-sysext.service. 
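The "(sd-merge)" lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-ami extension images onto /usr; the reload that follows is systemd picking up the units those images add. Merge state can be inspected with the stock tool:

```sh
systemd-sysext status   # which hierarchies currently have extensions merged
systemd-sysext list     # discovered extension images, e.g. the kubernetes .raw linked under /etc/extensions
```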
Feb 13 19:00:50.202583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:00:50.220496 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:00:50.229418 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:00:50.231934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:00:50.239479 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:00:50.245456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:00:50.252550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:00:50.258474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:00:50.269522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:00:50.273407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:00:50.282463 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:00:50.285344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:00:50.287440 lvm[1852]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:00:50.290454 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:00:50.298453 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:00:50.307965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:00:50.310060 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:00:50.315272 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:00:50.324605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:00:50.328984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:00:50.331227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:00:50.346858 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:00:50.390848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:00:50.393399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:00:50.398137 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:00:50.399302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:00:50.405043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:00:50.412286 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:00:50.440581 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:00:50.447369 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:00:50.468941 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 19:00:50.478099 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:00:50.481889 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:00:50.482262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:00:50.489673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:00:50.500882 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:00:50.501652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:00:50.520253 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:00:50.534378 lvm[1892]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:00:50.572781 augenrules[1898]: No rules Feb 13 19:00:50.576927 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:00:50.580517 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:00:50.597269 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:00:50.609109 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:00:50.631879 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:00:50.632860 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:00:50.639310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:00:50.756620 systemd-resolved[1866]: Positive Trust Anchors: Feb 13 19:00:50.756658 systemd-resolved[1866]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:00:50.756722 systemd-resolved[1866]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:00:50.761773 systemd-networkd[1865]: lo: Link UP Feb 13 19:00:50.761788 systemd-networkd[1865]: lo: Gained carrier Feb 13 19:00:50.764757 systemd-resolved[1866]: Defaulting to hostname 'linux'. Feb 13 19:00:50.765586 systemd-networkd[1865]: Enumeration completed Feb 13 19:00:50.765864 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:00:50.767944 systemd-networkd[1865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:00:50.768073 systemd-networkd[1865]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:00:50.770083 systemd-networkd[1865]: eth0: Link UP Feb 13 19:00:50.770557 systemd-networkd[1865]: eth0: Gained carrier Feb 13 19:00:50.770701 systemd-networkd[1865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
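eth0 matched Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which requests DHCP. A minimal equivalent drop-in (a sketch, not the shipped file) would be:

```sh
cat > /etc/systemd/network/50-dhcp-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
networkctl status eth0   # should show the DHCPv4 lease (172.31.27.65/20 in the entries below)
```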
Feb 13 19:00:50.776298 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:00:50.788286 systemd-networkd[1865]: eth0: DHCPv4 address 172.31.27.65/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:00:50.788646 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:00:50.791437 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:00:50.793885 systemd[1]: Reached target network.target - Network. Feb 13 19:00:50.797954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:00:50.800233 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:00:50.803331 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:00:50.806000 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:00:50.808731 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:00:50.810992 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:00:50.817336 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:00:50.819708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:00:50.819889 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:00:50.822278 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:00:50.825509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:00:50.830183 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:00:50.837213 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:00:50.840127 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:00:50.842548 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:00:50.848285 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:00:50.851733 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:00:50.857198 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:00:50.860033 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:00:50.863343 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:00:50.865492 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:00:50.867695 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:00:50.867966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:00:50.874349 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:00:50.884745 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:00:50.890619 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:00:50.896391 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:00:50.904528 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 19:00:50.907407 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:00:50.920426 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:00:50.930494 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:00:50.939005 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:00:50.945404 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:00:50.950300 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:00:50.960017 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:00:50.976012 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:00:50.980801 jq[1925]: false Feb 13 19:00:50.983752 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:00:50.986751 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:00:50.994809 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:00:51.010376 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:00:51.029885 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:00:51.032229 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:00:51.035973 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:00:51.038723 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:00:51.063942 dbus-daemon[1924]: [system] SELinux support is enabled Feb 13 19:00:51.068708 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:00:51.084421 extend-filesystems[1926]: Found loop4 Feb 13 19:00:51.084421 extend-filesystems[1926]: Found loop5 Feb 13 19:00:51.084421 extend-filesystems[1926]: Found loop6 Feb 13 19:00:51.084421 extend-filesystems[1926]: Found loop7 Feb 13 19:00:51.084421 extend-filesystems[1926]: Found nvme0n1 Feb 13 19:00:51.084421 extend-filesystems[1926]: Found nvme0n1p1 Feb 13 19:00:51.084421 extend-filesystems[1926]: Found nvme0n1p2 Feb 13 19:00:51.084421 extend-filesystems[1926]: Found nvme0n1p3 Feb 13 19:00:51.082828 dbus-daemon[1924]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1865 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:00:51.104291 jq[1938]: true Feb 13 19:00:51.120254 extend-filesystems[1926]: Found usr Feb 13 19:00:51.120254 extend-filesystems[1926]: Found nvme0n1p4 Feb 13 19:00:51.120254 extend-filesystems[1926]: Found nvme0n1p6 Feb 13 19:00:51.120254 extend-filesystems[1926]: Found nvme0n1p7 Feb 13 19:00:51.120254 extend-filesystems[1926]: Found nvme0n1p9 Feb 13 19:00:51.120254 extend-filesystems[1926]: Checking size of /dev/nvme0n1p9 Feb 13 19:00:51.145069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 13 19:00:51.145172 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:00:51.148415 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:00:51.148473 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:00:51.150740 (ntainerd)[1951]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:00:51.157032 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:00:51.163384 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:00:51.163833 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:00:51.175432 update_engine[1937]: I20250213 19:00:51.174753 1937 main.cc:92] Flatcar Update Engine starting
Feb 13 19:00:51.185698 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:00:51.206176 tar[1943]: linux-arm64/helm
Feb 13 19:00:51.206933 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:00:51.209783 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:00:51.215417 update_engine[1937]: I20250213 19:00:51.215102 1937 update_check_scheduler.cc:74] Next update check in 7m17s
Feb 13 19:00:51.219604 jq[1954]: true
Feb 13 19:00:51.234544 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:00:51.235041 extend-filesystems[1926]: Resized partition /dev/nvme0n1p9
Feb 13 19:00:51.226897 ntpd[1928]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:02:48 UTC 2025 (1): Starting
Feb 13 19:00:51.226943 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:00:51.226962 ntpd[1928]: ----------------------------------------------------
Feb 13 19:00:51.226980 ntpd[1928]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:00:51.226998 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:00:51.227016 ntpd[1928]: corporation. Support and training for ntp-4 are
Feb 13 19:00:51.227033 ntpd[1928]: available at https://www.nwtime.org/support
Feb 13 19:00:51.227051 ntpd[1928]: ----------------------------------------------------
Feb 13 19:00:51.244945 ntpd[1928]: proto: precision = 0.096 usec (-23)
Feb 13 19:00:51.252588 ntpd[1928]: basedate set to 2025-02-01
Feb 13 19:00:51.252621 ntpd[1928]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:00:51.263585 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:00:51.263663 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:00:51.275779 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:00:51.276214 extend-filesystems[1973]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:00:51.276864 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:00:51.276933 ntpd[1928]: Listen normally on 3 eth0 172.31.27.65:123
Feb 13 19:00:51.277009 ntpd[1928]: Listen normally on 4 lo [::1]:123
Feb 13 19:00:51.277089 ntpd[1928]: bind(21) AF_INET6 fe80::42e:dff:fe33:112d%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:00:51.277131 ntpd[1928]: unable to create socket on eth0 (5) for fe80::42e:dff:fe33:112d%2#123
Feb 13 19:00:51.277346 ntpd[1928]: failed to init interface for address fe80::42e:dff:fe33:112d%2
Feb 13 19:00:51.277408 ntpd[1928]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:00:51.318851 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:00:51.318901 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:00:51.364902 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:00:51.373396 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1698)
Feb 13 19:00:51.421180 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:00:51.441645 extend-filesystems[1973]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:00:51.441645 extend-filesystems[1973]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:00:51.441645 extend-filesystems[1973]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:00:51.470161 extend-filesystems[1926]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:00:51.457002 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:00:51.460032 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
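extend-filesystems grew root partition 9 and then resized the mounted ext4 filesystem online, from 553472 to 1489915 4k blocks per the kernel lines above. The manual equivalent, once the partition itself has been enlarged, is a single call; resize2fs defaults to the partition's full size:

```sh
resize2fs /dev/nvme0n1p9   # online grow of the mounted root filesystem
```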
Feb 13 19:00:51.523596 coreos-metadata[1923]: Feb 13 19:00:51.520 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:00:51.527956 coreos-metadata[1923]: Feb 13 19:00:51.525 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:00:51.527956 coreos-metadata[1923]: Feb 13 19:00:51.527 INFO Fetch successful Feb 13 19:00:51.527956 coreos-metadata[1923]: Feb 13 19:00:51.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:00:51.534698 coreos-metadata[1923]: Feb 13 19:00:51.532 INFO Fetch successful Feb 13 19:00:51.535080 coreos-metadata[1923]: Feb 13 19:00:51.532 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:00:51.535080 coreos-metadata[1923]: Feb 13 19:00:51.535 INFO Fetch successful Feb 13 19:00:51.535080 coreos-metadata[1923]: Feb 13 19:00:51.535 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:00:51.536556 coreos-metadata[1923]: Feb 13 19:00:51.536 INFO Fetch successful Feb 13 19:00:51.536556 coreos-metadata[1923]: Feb 13 19:00:51.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:00:51.539529 coreos-metadata[1923]: Feb 13 19:00:51.539 INFO Fetch failed with 404: resource not found Feb 13 19:00:51.539529 coreos-metadata[1923]: Feb 13 19:00:51.539 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:00:51.540420 bash[2006]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:00:51.554509 coreos-metadata[1923]: Feb 13 19:00:51.540 INFO Fetch successful Feb 13 19:00:51.554509 coreos-metadata[1923]: Feb 13 19:00:51.549 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:00:51.554509 coreos-metadata[1923]: Feb 13 19:00:51.549 INFO Fetch successful Feb 13 19:00:51.554509 coreos-metadata[1923]: Feb 13 19:00:51.549 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:00:51.544941 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:00:51.555955 coreos-metadata[1923]: Feb 13 19:00:51.555 INFO Fetch successful Feb 13 19:00:51.555955 coreos-metadata[1923]: Feb 13 19:00:51.555 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:00:51.564172 coreos-metadata[1923]: Feb 13 19:00:51.559 INFO Fetch successful Feb 13 19:00:51.564172 coreos-metadata[1923]: Feb 13 19:00:51.559 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:00:51.560597 systemd[1]: Starting sshkeys.service... Feb 13 19:00:51.566494 coreos-metadata[1923]: Feb 13 19:00:51.566 INFO Fetch successful Feb 13 19:00:51.608097 systemd-logind[1936]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:00:51.617226 systemd-logind[1936]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:00:51.619222 systemd-logind[1936]: New seat seat0. Feb 13 19:00:51.633317 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:00:51.662717 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:00:51.693892 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:00:51.720248 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
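Each "PUT .../api/token" followed by metadata GETs above is the IMDSv2 session flow: a short-lived token is obtained with a PUT and presented on every fetch, and the 404 on the ipv6 key simply means this instance has no IPv6 address assigned. The same flow by hand, assuming curl is available:

```sh
TOKEN=$(curl -sf -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -sf -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/2021-01-03/meta-data/instance-id"
```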
Feb 13 19:00:51.732935 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:00:51.750013 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:00:51.756949 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:00:51.762786 dbus-daemon[1924]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1965 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:00:51.850791 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:00:51.882810 coreos-metadata[2025]: Feb 13 19:00:51.882 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:00:51.885122 coreos-metadata[2025]: Feb 13 19:00:51.884 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:00:51.887077 coreos-metadata[2025]: Feb 13 19:00:51.885 INFO Fetch successful Feb 13 19:00:51.887077 coreos-metadata[2025]: Feb 13 19:00:51.887 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:00:51.889845 coreos-metadata[2025]: Feb 13 19:00:51.889 INFO Fetch successful Feb 13 19:00:51.894318 unknown[2025]: wrote ssh authorized keys file for user: core Feb 13 19:00:51.895876 polkitd[2050]: Started polkitd version 121 Feb 13 19:00:51.937780 polkitd[2050]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:00:51.937920 polkitd[2050]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:00:51.943967 polkitd[2050]: Finished loading, compiling and executing 2 rules Feb 13 19:00:51.949203 sshd_keygen[1964]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:00:51.952851 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:00:51.953265 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:00:51.955126 polkitd[2050]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:00:51.971448 update-ssh-keys[2067]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:00:51.971357 systemd-networkd[1865]: eth0: Gained IPv6LL Feb 13 19:00:51.972945 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:00:51.986799 systemd[1]: Finished sshkeys.service. Feb 13 19:00:51.990199 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:00:51.996998 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:00:52.051888 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:00:52.061514 locksmithd[1971]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:00:52.062990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:52.070022 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:00:52.092455 systemd-hostnamed[1965]: Hostname set to <ip-172-31-27-65> (transient) Feb 13 19:00:52.094069 systemd-resolved[1866]: System hostname changed to 'ip-172-31-27-65'. Feb 13 19:00:52.100745 containerd[1951]: time="2025-02-13T19:00:52.100322230Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:00:52.190703 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
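
polkitd reports compiling two rules after scanning /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d. Rules files are JavaScript snippets named *.rules and, per the polkit documentation, are processed in lexical order of their filenames across both directories, so a numeric filename prefix decides precedence. A small sketch of that enumeration (assuming a plain sort by basename is an adequate model of the ordering):

    import os

    RULE_DIRS = ("/etc/polkit-1/rules.d", "/usr/share/polkit-1/rules.d")

    def polkit_rule_files():
        """Collect *.rules files from both directories, sorted by basename,
        mirroring the lexical processing order polkitd uses."""
        found = []
        for d in RULE_DIRS:
            if not os.path.isdir(d):
                continue
            for name in os.listdir(d):
                if name.endswith(".rules"):
                    found.append(os.path.join(d, name))
        return sorted(found, key=os.path.basename)

    for path in polkit_rule_files():
        print(path)
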
Feb 13 19:00:52.205601 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:00:52.216701 systemd[1]: Started sshd@0-172.31.27.65:22-139.178.89.65:39944.service - OpenSSH per-connection server daemon (139.178.89.65:39944). Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: Initializing new seelog logger Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: New Seelog Logger Creation Complete Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 processing appconfig overrides Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 processing appconfig overrides Feb 13 19:00:52.253166 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.259174 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO Proxy environment variables: Feb 13 19:00:52.268850 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.268850 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 processing appconfig overrides Feb 13 19:00:52.270595 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.270595 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:00:52.270767 amazon-ssm-agent[2092]: 2025/02/13 19:00:52 processing appconfig overrides Feb 13 19:00:52.294889 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:00:52.297288 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:00:52.314900 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:00:52.317899 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:00:52.363383 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO https_proxy: Feb 13 19:00:52.372220 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:00:52.385887 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:00:52.390319 containerd[1951]: time="2025-02-13T19:00:52.387890411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:52.394084 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:00:52.398692 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.417740412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.417806460Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.417840864Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418136256Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418194924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418317276Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418345392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418672116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418703472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418734984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423431 containerd[1951]: time="2025-02-13T19:00:52.418757952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:52.423982 containerd[1951]: time="2025-02-13T19:00:52.418917756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:52.433626 containerd[1951]: time="2025-02-13T19:00:52.433564584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:52.438047 containerd[1951]: time="2025-02-13T19:00:52.435319164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:52.438047 containerd[1951]: time="2025-02-13T19:00:52.435398736Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:00:52.438406 containerd[1951]: time="2025-02-13T19:00:52.438362340Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:00:52.449950 containerd[1951]: time="2025-02-13T19:00:52.446332692Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:00:52.457202 containerd[1951]: time="2025-02-13T19:00:52.457128372Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.457405248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.457582752Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.457639908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.457675980Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.457934052Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458353404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458539308Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458579676Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458614920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458648280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458684508Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458715060Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458748252Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459189 containerd[1951]: time="2025-02-13T19:00:52.458781024Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.458811084Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.458841180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.458872020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.458911812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.458944044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.458973852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.459004452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.459032844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.459062064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.459089484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.459905 containerd[1951]: time="2025-02-13T19:00:52.459119388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.466048248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.466375980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.466463184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.466553904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.466588008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.467198028Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.468224256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.470239524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.471018 containerd[1951]: time="2025-02-13T19:00:52.470325864Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:00:52.471569 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO http_proxy: Feb 13 19:00:52.474284 containerd[1951]: time="2025-02-13T19:00:52.471673536Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:00:52.474284 containerd[1951]: time="2025-02-13T19:00:52.473231496Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:00:52.474284 containerd[1951]: time="2025-02-13T19:00:52.474221604Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:00:52.477164 containerd[1951]: time="2025-02-13T19:00:52.474530724Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:00:52.477164 containerd[1951]: time="2025-02-13T19:00:52.474567888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 19:00:52.477164 containerd[1951]: time="2025-02-13T19:00:52.475196340Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:00:52.478248 containerd[1951]: time="2025-02-13T19:00:52.475228956Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:00:52.478457 containerd[1951]: time="2025-02-13T19:00:52.478407072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:00:52.485359 containerd[1951]: time="2025-02-13T19:00:52.482494800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:00:52.485359 containerd[1951]: time="2025-02-13T19:00:52.482650320Z" level=info msg="Connect containerd service" Feb 13 19:00:52.485359 containerd[1951]: time="2025-02-13T19:00:52.483743568Z" level=info msg="using legacy CRI server" Feb 13 19:00:52.485359 containerd[1951]: time="2025-02-13T19:00:52.484807380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:00:52.487885 containerd[1951]: 
time="2025-02-13T19:00:52.487745388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:00:52.494173 containerd[1951]: time="2025-02-13T19:00:52.494055612Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:00:52.494442 containerd[1951]: time="2025-02-13T19:00:52.494373588Z" level=info msg="Start subscribing containerd event" Feb 13 19:00:52.494784 containerd[1951]: time="2025-02-13T19:00:52.494752092Z" level=info msg="Start recovering state" Feb 13 19:00:52.497801 containerd[1951]: time="2025-02-13T19:00:52.496260348Z" level=info msg="Start event monitor" Feb 13 19:00:52.497801 containerd[1951]: time="2025-02-13T19:00:52.496330428Z" level=info msg="Start snapshots syncer" Feb 13 19:00:52.497801 containerd[1951]: time="2025-02-13T19:00:52.496356072Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:00:52.497801 containerd[1951]: time="2025-02-13T19:00:52.496375524Z" level=info msg="Start streaming server" Feb 13 19:00:52.498701 containerd[1951]: time="2025-02-13T19:00:52.498420744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:00:52.499513 containerd[1951]: time="2025-02-13T19:00:52.499471920Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:00:52.499722 containerd[1951]: time="2025-02-13T19:00:52.499696212Z" level=info msg="containerd successfully booted in 0.407802s" Feb 13 19:00:52.506462 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:00:52.571449 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO no_proxy: Feb 13 19:00:52.644189 sshd[2129]: Accepted publickey for core from 139.178.89.65 port 39944 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:00:52.651806 sshd-session[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:52.671166 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:00:52.678454 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:00:52.692301 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:00:52.728117 systemd-logind[1936]: New session 1 of user core. Feb 13 19:00:52.745175 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:00:52.760806 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:00:52.771200 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:00:52.778752 (systemd)[2166]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:00:52.791343 systemd-logind[1936]: New session c1 of user core. 
Feb 13 19:00:52.871189 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO Agent will take identity from EC2 Feb 13 19:00:52.976253 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:00:53.079607 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:00:53.179247 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:00:53.196731 systemd[2166]: Queued start job for default target default.target. Feb 13 19:00:53.204945 systemd[2166]: Created slice app.slice - User Application Slice. Feb 13 19:00:53.205565 systemd[2166]: Reached target paths.target - Paths. Feb 13 19:00:53.205676 systemd[2166]: Reached target timers.target - Timers. Feb 13 19:00:53.214906 systemd[2166]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:00:53.244732 systemd[2166]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:00:53.244961 systemd[2166]: Reached target sockets.target - Sockets. Feb 13 19:00:53.245059 systemd[2166]: Reached target basic.target - Basic System. Feb 13 19:00:53.245166 systemd[2166]: Reached target default.target - Main User Target. Feb 13 19:00:53.245279 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:00:53.246008 systemd[2166]: Startup finished in 429ms. Feb 13 19:00:53.255254 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:00:53.281226 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:00:53.389261 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:00:53.439747 systemd[1]: Started sshd@1-172.31.27.65:22-139.178.89.65:39958.service - OpenSSH per-connection server daemon (139.178.89.65:39958). Feb 13 19:00:53.490192 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:00:53.565172 tar[1943]: linux-arm64/LICENSE Feb 13 19:00:53.565172 tar[1943]: linux-arm64/README.md Feb 13 19:00:53.590034 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:00:53.590635 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:00:53.691105 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [Registrar] Starting registrar module Feb 13 19:00:53.697109 sshd[2179]: Accepted publickey for core from 139.178.89.65 port 39958 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:00:53.702355 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:53.719405 systemd-logind[1936]: New session 2 of user core. Feb 13 19:00:53.723454 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:00:53.792445 amazon-ssm-agent[2092]: 2025-02-13 19:00:52 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:00:53.866190 sshd[2184]: Connection closed by 139.178.89.65 port 39958 Feb 13 19:00:53.866102 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:53.877042 systemd[1]: sshd@1-172.31.27.65:22-139.178.89.65:39958.service: Deactivated successfully. Feb 13 19:00:53.882318 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:00:53.884111 systemd-logind[1936]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:00:53.887296 systemd-logind[1936]: Removed session 2. 
Feb 13 19:00:53.911257 amazon-ssm-agent[2092]: 2025-02-13 19:00:53 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:00:53.914306 systemd[1]: Started sshd@2-172.31.27.65:22-139.178.89.65:39974.service - OpenSSH per-connection server daemon (139.178.89.65:39974). Feb 13 19:00:53.951134 amazon-ssm-agent[2092]: 2025-02-13 19:00:53 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:00:53.951134 amazon-ssm-agent[2092]: 2025-02-13 19:00:53 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:00:53.951134 amazon-ssm-agent[2092]: 2025-02-13 19:00:53 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:00:54.011578 amazon-ssm-agent[2092]: 2025-02-13 19:00:53 INFO [CredentialRefresher] Next credential rotation will be in 30.791659263566668 minutes Feb 13 19:00:54.124211 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 39974 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:00:54.127861 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:54.135920 systemd-logind[1936]: New session 3 of user core. Feb 13 19:00:54.140866 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:00:54.170589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:54.174360 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:00:54.177559 systemd[1]: Startup finished in 1.077s (kernel) + 9.804s (initrd) + 8.635s (userspace) = 19.517s. Feb 13 19:00:54.182722 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:00:54.237872 ntpd[1928]: Listen normally on 6 eth0 [fe80::42e:dff:fe33:112d%2]:123 Feb 13 19:00:54.238374 ntpd[1928]: 13 Feb 19:00:54 ntpd[1928]: Listen normally on 6 eth0 [fe80::42e:dff:fe33:112d%2]:123 Feb 13 19:00:54.276398 sshd[2197]: Connection closed by 139.178.89.65 port 39974 Feb 13 19:00:54.278764 sshd-session[2191]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:54.286333 systemd[1]: sshd@2-172.31.27.65:22-139.178.89.65:39974.service: Deactivated successfully. Feb 13 19:00:54.291944 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:00:54.294976 systemd-logind[1936]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:00:54.297879 systemd-logind[1936]: Removed session 3. Feb 13 19:00:54.978019 amazon-ssm-agent[2092]: 2025-02-13 19:00:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:00:55.081588 amazon-ssm-agent[2092]: 2025-02-13 19:00:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2215) started Feb 13 19:00:55.156250 kubelet[2199]: E0213 19:00:55.156088 2199 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:00:55.160893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:00:55.162444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:00:55.164516 systemd[1]: kubelet.service: Consumed 1.324s CPU time, 241.6M memory peak. 
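
The kubelet above exits with status 1 because /var/lib/kubelet/config.yaml does not exist. On a kubeadm-managed node that file is written by kubeadm init or kubeadm join, so this failure (and the restart loop that follows below) is the normal state between first boot and joining a cluster. A trivial preflight check in the same spirit, using the path from the log:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    # The kubelet refuses to start without its config file; kubeadm writes it
    # during init/join. Until then systemd keeps restarting the unit.
    if not os.path.isfile(KUBELET_CONFIG):
        print("kubelet not configured yet:", KUBELET_CONFIG, "missing",
              file=sys.stderr)
        sys.exit(1)  # mirrors the unit's status=1/FAILURE
    print("kubelet config present")
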
Feb 13 19:00:55.181849 amazon-ssm-agent[2092]: 2025-02-13 19:00:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:00:58.570109 systemd-resolved[1866]: Clock change detected. Flushing caches. Feb 13 19:01:04.657751 systemd[1]: Started sshd@3-172.31.27.65:22-139.178.89.65:44788.service - OpenSSH per-connection server daemon (139.178.89.65:44788). Feb 13 19:01:04.835950 sshd[2227]: Accepted publickey for core from 139.178.89.65 port 44788 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:01:04.838348 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:04.846951 systemd-logind[1936]: New session 4 of user core. Feb 13 19:01:04.852566 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:01:04.978183 sshd[2229]: Connection closed by 139.178.89.65 port 44788 Feb 13 19:01:04.979116 sshd-session[2227]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:04.985009 systemd-logind[1936]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:01:04.986439 systemd[1]: sshd@3-172.31.27.65:22-139.178.89.65:44788.service: Deactivated successfully. Feb 13 19:01:04.989722 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:01:04.992108 systemd-logind[1936]: Removed session 4. Feb 13 19:01:05.024786 systemd[1]: Started sshd@4-172.31.27.65:22-139.178.89.65:44804.service - OpenSSH per-connection server daemon (139.178.89.65:44804). Feb 13 19:01:05.203048 sshd[2235]: Accepted publickey for core from 139.178.89.65 port 44804 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:01:05.205441 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:05.215649 systemd-logind[1936]: New session 5 of user core. Feb 13 19:01:05.223557 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:01:05.341696 sshd[2237]: Connection closed by 139.178.89.65 port 44804 Feb 13 19:01:05.342503 sshd-session[2235]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:05.348998 systemd[1]: sshd@4-172.31.27.65:22-139.178.89.65:44804.service: Deactivated successfully. Feb 13 19:01:05.352492 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:01:05.353767 systemd-logind[1936]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:01:05.355534 systemd-logind[1936]: Removed session 5. Feb 13 19:01:05.378838 systemd[1]: Started sshd@5-172.31.27.65:22-139.178.89.65:44806.service - OpenSSH per-connection server daemon (139.178.89.65:44806). Feb 13 19:01:05.536061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:01:05.542696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:05.566469 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 44806 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:01:05.569592 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:05.581414 systemd-logind[1936]: New session 6 of user core. Feb 13 19:01:05.585603 systemd[1]: Started session-6.scope - Session 6 of User core. 
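
The systemd-resolved "Clock change detected" entry near the top of this stretch marks ntpd stepping the previously unsynchronized clock: the journal jumps from 19:00:55.181849 to 19:00:58.570109 between consecutive entries. That gap is only an upper bound on the step, since real elapsed time is folded in, but it can be read straight off the timestamps:

    from datetime import datetime

    # Consecutive journal timestamps around the "Clock change detected" entry.
    before = datetime.strptime("19:00:55.181849", "%H:%M:%S.%f")
    after = datetime.strptime("19:00:58.570109", "%H:%M:%S.%f")

    # Upper bound on ntpd's clock step: the jump also contains whatever real
    # time elapsed between the two log lines.
    print(f"apparent jump: {(after - before).total_seconds():.3f} s")  # ~3.388 s
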
Feb 13 19:01:05.713676 sshd[2248]: Connection closed by 139.178.89.65 port 44806 Feb 13 19:01:05.715720 sshd-session[2243]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:05.724684 systemd[1]: sshd@5-172.31.27.65:22-139.178.89.65:44806.service: Deactivated successfully. Feb 13 19:01:05.729188 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:01:05.731039 systemd-logind[1936]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:01:05.734053 systemd-logind[1936]: Removed session 6. Feb 13 19:01:05.758964 systemd[1]: Started sshd@6-172.31.27.65:22-139.178.89.65:44812.service - OpenSSH per-connection server daemon (139.178.89.65:44812). Feb 13 19:01:05.846146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:05.863115 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:01:05.947247 kubelet[2261]: E0213 19:01:05.947087 2261 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:01:05.954831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:01:05.955157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:01:05.956478 systemd[1]: kubelet.service: Consumed 296ms CPU time, 96.9M memory peak. Feb 13 19:01:05.966814 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 44812 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:01:05.969257 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:05.978035 systemd-logind[1936]: New session 7 of user core. Feb 13 19:01:05.985549 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:01:06.106177 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:01:06.107483 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:06.122887 sudo[2270]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:06.147885 sshd[2269]: Connection closed by 139.178.89.65 port 44812 Feb 13 19:01:06.147289 sshd-session[2254]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:06.154503 systemd-logind[1936]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:01:06.156007 systemd[1]: sshd@6-172.31.27.65:22-139.178.89.65:44812.service: Deactivated successfully. Feb 13 19:01:06.159190 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:01:06.161026 systemd-logind[1936]: Removed session 7. Feb 13 19:01:06.184817 systemd[1]: Started sshd@7-172.31.27.65:22-139.178.89.65:44828.service - OpenSSH per-connection server daemon (139.178.89.65:44828). Feb 13 19:01:06.372141 sshd[2276]: Accepted publickey for core from 139.178.89.65 port 44828 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:01:06.374645 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:06.383669 systemd-logind[1936]: New session 8 of user core. Feb 13 19:01:06.394560 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 19:01:06.498482 sudo[2280]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:01:06.499104 sudo[2280]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:06.504745 sudo[2280]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:06.514596 sudo[2279]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:01:06.515216 sudo[2279]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:06.545590 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:01:06.590157 augenrules[2302]: No rules Feb 13 19:01:06.592250 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:01:06.592800 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:01:06.594924 sudo[2279]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:06.618409 sshd[2278]: Connection closed by 139.178.89.65 port 44828 Feb 13 19:01:06.619218 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:06.625131 systemd-logind[1936]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:01:06.627065 systemd[1]: sshd@7-172.31.27.65:22-139.178.89.65:44828.service: Deactivated successfully. Feb 13 19:01:06.630375 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:01:06.634074 systemd-logind[1936]: Removed session 8. Feb 13 19:01:06.659840 systemd[1]: Started sshd@8-172.31.27.65:22-139.178.89.65:44838.service - OpenSSH per-connection server daemon (139.178.89.65:44838). Feb 13 19:01:06.849997 sshd[2311]: Accepted publickey for core from 139.178.89.65 port 44838 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:01:06.852342 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:06.860973 systemd-logind[1936]: New session 9 of user core. Feb 13 19:01:06.867585 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:01:06.972732 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:01:06.973696 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:07.644906 (dockerd)[2330]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:01:07.645148 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:01:07.988037 dockerd[2330]: time="2025-02-13T19:01:07.987940394Z" level=info msg="Starting up" Feb 13 19:01:08.119519 systemd[1]: var-lib-docker-metacopy\x2dcheck4061905867-merged.mount: Deactivated successfully. Feb 13 19:01:08.135011 dockerd[2330]: time="2025-02-13T19:01:08.134940875Z" level=info msg="Loading containers: start." Feb 13 19:01:08.377371 kernel: Initializing XFRM netlink socket Feb 13 19:01:08.409427 (udev-worker)[2353]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:01:08.496511 systemd-networkd[1865]: docker0: Link UP Feb 13 19:01:08.539436 dockerd[2330]: time="2025-02-13T19:01:08.539365657Z" level=info msg="Loading containers: done." Feb 13 19:01:08.560856 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1032405046-merged.mount: Deactivated successfully. 
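
dockerd performs two throwaway mounts before serving (the metacopy check and the overlay2 opaque-bug check whose .mount units are cleaned up above) to learn which overlayfs features the running kernel supports. Relatedly, whether a path lives on a filesystem a storage driver accepts can be read from /proc/self/mounts, which is why containerd skipped its btrfs and zfs snapshotters earlier ("must be a btrfs filesystem to be used with the btrfs snapshotter"). A sketch of that lookup via longest mount-point prefix match (it ignores the rare octal-escaped mount point):

    def fs_type(path: str) -> str:
        """Filesystem type of the mount containing path, by longest
        mount-point prefix match over /proc/self/mounts."""
        best, best_type = "", "unknown"
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                _dev, mnt, fstype, *_rest = line.split()
                prefix = mnt.rstrip("/") + "/"
                if (path == mnt or path.startswith(prefix)) and len(mnt) > len(best):
                    best, best_type = mnt, fstype
        return best_type

    # containerd skipped its btrfs snapshotter because this path is ext4:
    print(fs_type("/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"))
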
Feb 13 19:01:08.570686 dockerd[2330]: time="2025-02-13T19:01:08.570633949Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:01:08.571328 dockerd[2330]: time="2025-02-13T19:01:08.570989653Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:01:08.571328 dockerd[2330]: time="2025-02-13T19:01:08.571226125Z" level=info msg="Daemon has completed initialization" Feb 13 19:01:08.635142 dockerd[2330]: time="2025-02-13T19:01:08.634997269Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:01:08.635268 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:01:09.810149 containerd[1951]: time="2025-02-13T19:01:09.809914551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:01:10.430549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3554073787.mount: Deactivated successfully. Feb 13 19:01:11.931374 containerd[1951]: time="2025-02-13T19:01:11.930696426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:11.932943 containerd[1951]: time="2025-02-13T19:01:11.932874162Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 19:01:11.934950 containerd[1951]: time="2025-02-13T19:01:11.934869294Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:11.940545 containerd[1951]: time="2025-02-13T19:01:11.940467210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:11.943271 containerd[1951]: time="2025-02-13T19:01:11.942748338Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.132774903s" Feb 13 19:01:11.943271 containerd[1951]: time="2025-02-13T19:01:11.942805494Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:01:11.982626 containerd[1951]: time="2025-02-13T19:01:11.982564926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:01:13.699852 containerd[1951]: time="2025-02-13T19:01:13.699793351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:13.701733 containerd[1951]: time="2025-02-13T19:01:13.701373007Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 19:01:13.702576 containerd[1951]: time="2025-02-13T19:01:13.702484339Z" level=info msg="ImageCreate event 
name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:13.707959 containerd[1951]: time="2025-02-13T19:01:13.707906719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:13.710474 containerd[1951]: time="2025-02-13T19:01:13.710275783Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.727650173s" Feb 13 19:01:13.710474 containerd[1951]: time="2025-02-13T19:01:13.710347255Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:01:13.751395 containerd[1951]: time="2025-02-13T19:01:13.751282111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:01:15.100393 containerd[1951]: time="2025-02-13T19:01:15.099892158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:15.102760 containerd[1951]: time="2025-02-13T19:01:15.102671994Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 19:01:15.104195 containerd[1951]: time="2025-02-13T19:01:15.104117802Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:15.109919 containerd[1951]: time="2025-02-13T19:01:15.109814670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:15.112366 containerd[1951]: time="2025-02-13T19:01:15.112213746Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.360851451s" Feb 13 19:01:15.112366 containerd[1951]: time="2025-02-13T19:01:15.112268910Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:01:15.157451 containerd[1951]: time="2025-02-13T19:01:15.157399338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:01:15.967605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:01:15.977704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:16.338208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:01:16.348167 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:01:16.468130 kubelet[2611]: E0213 19:01:16.465975 2611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:01:16.473850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:01:16.474231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:01:16.474962 systemd[1]: kubelet.service: Consumed 336ms CPU time, 96.1M memory peak. Feb 13 19:01:16.562940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887069154.mount: Deactivated successfully. Feb 13 19:01:17.186375 containerd[1951]: time="2025-02-13T19:01:17.185467520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:17.188533 containerd[1951]: time="2025-02-13T19:01:17.188446988Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:01:17.191342 containerd[1951]: time="2025-02-13T19:01:17.191236820Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:17.195737 containerd[1951]: time="2025-02-13T19:01:17.195641816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:17.197372 containerd[1951]: time="2025-02-13T19:01:17.197061704Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 2.03960281s" Feb 13 19:01:17.197372 containerd[1951]: time="2025-02-13T19:01:17.197122904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:01:17.240064 containerd[1951]: time="2025-02-13T19:01:17.239995268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:01:17.851216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326708678.mount: Deactivated successfully. 
Feb 13 19:01:19.106677 containerd[1951]: time="2025-02-13T19:01:19.106596826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:19.109260 containerd[1951]: time="2025-02-13T19:01:19.109173562Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:01:19.111505 containerd[1951]: time="2025-02-13T19:01:19.111416098Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:19.119342 containerd[1951]: time="2025-02-13T19:01:19.119244742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:19.122489 containerd[1951]: time="2025-02-13T19:01:19.121723090Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.881655894s" Feb 13 19:01:19.122489 containerd[1951]: time="2025-02-13T19:01:19.121784842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:01:19.163782 containerd[1951]: time="2025-02-13T19:01:19.163715470Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:01:19.665173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805039677.mount: Deactivated successfully. 
Feb 13 19:01:19.680373 containerd[1951]: time="2025-02-13T19:01:19.679946820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:19.682736 containerd[1951]: time="2025-02-13T19:01:19.682657848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 19:01:19.684726 containerd[1951]: time="2025-02-13T19:01:19.684654672Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:19.689866 containerd[1951]: time="2025-02-13T19:01:19.689804436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:19.691717 containerd[1951]: time="2025-02-13T19:01:19.691549716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 527.768582ms" Feb 13 19:01:19.691717 containerd[1951]: time="2025-02-13T19:01:19.691596192Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:01:19.732588 containerd[1951]: time="2025-02-13T19:01:19.732500689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:01:20.330769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608775858.mount: Deactivated successfully. Feb 13 19:01:22.431994 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 19:01:22.791041 containerd[1951]: time="2025-02-13T19:01:22.790943416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:22.793449 containerd[1951]: time="2025-02-13T19:01:22.793354072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 19:01:22.795921 containerd[1951]: time="2025-02-13T19:01:22.795795184Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:22.803077 containerd[1951]: time="2025-02-13T19:01:22.802958788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:22.806168 containerd[1951]: time="2025-02-13T19:01:22.805872304Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.073302891s" Feb 13 19:01:22.806168 containerd[1951]: time="2025-02-13T19:01:22.805970164Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:01:26.717557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:01:26.728878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:27.073721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:27.084992 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:01:27.181198 kubelet[2801]: E0213 19:01:27.181132 2801 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:01:27.186809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:01:27.187464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:01:27.188576 systemd[1]: kubelet.service: Consumed 309ms CPU time, 94.4M memory peak. Feb 13 19:01:30.532405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:30.532759 systemd[1]: kubelet.service: Consumed 309ms CPU time, 94.4M memory peak. Feb 13 19:01:30.544841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:30.593672 systemd[1]: Reload requested from client PID 2816 ('systemctl') (unit session-9.scope)... Feb 13 19:01:30.593701 systemd[1]: Reloading... Feb 13 19:01:30.873380 zram_generator::config[2864]: No configuration found. Feb 13 19:01:31.123507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:01:31.357007 systemd[1]: Reloading finished in 762 ms. 
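
During the reload systemd warns that docker.socket still points ListenStream at a path under /var/run, a legacy symlink to /run; it rewrites the path on the fly but asks for the unit file to be updated. A quick scan for other unit files with the same stale prefix (assuming a plain-text search over the two usual unit directories is enough for this check):

    import os

    UNIT_DIRS = ("/etc/systemd/system", "/usr/lib/systemd/system")

    # Find unit files still referencing the legacy /var/run prefix, which
    # systemd warns about (and silently maps to /run) at daemon-reload time.
    for unit_dir in UNIT_DIRS:
        for root, _dirs, files in os.walk(unit_dir):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, errors="ignore") as f:
                        if "/var/run/" in f.read():
                            print("legacy /var/run reference:", path)
                except OSError:
                    continue
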
Feb 13 19:01:31.459617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:31.468694 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:31.471700 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:01:31.472449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:31.473476 systemd[1]: kubelet.service: Consumed 224ms CPU time, 82M memory peak. Feb 13 19:01:31.487873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:31.777857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:31.793337 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:01:31.873362 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:01:31.873362 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:01:31.873362 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:01:31.873362 kubelet[2926]: I0213 19:01:31.872606 2926 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:01:34.113087 kubelet[2926]: I0213 19:01:34.113039 2926 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:01:34.114361 kubelet[2926]: I0213 19:01:34.113679 2926 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:01:34.114361 kubelet[2926]: I0213 19:01:34.114028 2926 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:01:34.145563 kubelet[2926]: E0213 19:01:34.145503 2926 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.146102 kubelet[2926]: I0213 19:01:34.145894 2926 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:01:34.160930 kubelet[2926]: I0213 19:01:34.160885 2926 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:01:34.163748 kubelet[2926]: I0213 19:01:34.163644 2926 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:01:34.164051 kubelet[2926]: I0213 19:01:34.163739 2926 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:01:34.164244 kubelet[2926]: I0213 19:01:34.164071 2926 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:01:34.164244 kubelet[2926]: I0213 19:01:34.164093 2926 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:01:34.164416 kubelet[2926]: I0213 19:01:34.164373 2926 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:01:34.167161 kubelet[2926]: I0213 19:01:34.166012 2926 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:01:34.167161 kubelet[2926]: I0213 19:01:34.166065 2926 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:01:34.167161 kubelet[2926]: I0213 19:01:34.166147 2926 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:01:34.167161 kubelet[2926]: I0213 19:01:34.166168 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:01:34.168228 kubelet[2926]: I0213 19:01:34.168176 2926 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:01:34.168781 kubelet[2926]: I0213 19:01:34.168756 2926 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:01:34.168973 kubelet[2926]: W0213 19:01:34.168952 2926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:01:34.170176 kubelet[2926]: I0213 19:01:34.170137 2926 server.go:1264] "Started kubelet" Feb 13 19:01:34.170647 kubelet[2926]: W0213 19:01:34.170584 2926 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.171124 kubelet[2926]: E0213 19:01:34.170769 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.171124 kubelet[2926]: W0213 19:01:34.170892 2926 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-65&limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.171124 kubelet[2926]: E0213 19:01:34.170947 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-65&limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.180605 kubelet[2926]: I0213 19:01:34.180395 2926 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:01:34.182614 kubelet[2926]: E0213 19:01:34.181445 2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.65:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.65:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-65.1823d9beb34991e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-65,UID:ip-172-31-27-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-65,},FirstTimestamp:2025-02-13 19:01:34.170100192 +0000 UTC m=+2.370244848,LastTimestamp:2025-02-13 19:01:34.170100192 +0000 UTC m=+2.370244848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-65,}" Feb 13 19:01:34.183289 kubelet[2926]: I0213 19:01:34.183246 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:01:34.190290 kubelet[2926]: I0213 19:01:34.188228 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:01:34.190290 kubelet[2926]: I0213 19:01:34.188751 2926 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:01:34.192774 kubelet[2926]: I0213 19:01:34.192725 2926 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:01:34.198272 kubelet[2926]: I0213 19:01:34.198221 2926 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:01:34.203645 kubelet[2926]: I0213 19:01:34.203609 2926 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:01:34.205907 kubelet[2926]: E0213 19:01:34.205856 2926 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:01:34.206347 kubelet[2926]: I0213 19:01:34.206291 2926 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:01:34.207021 kubelet[2926]: W0213 19:01:34.206971 2926 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.207194 kubelet[2926]: E0213 19:01:34.207169 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.207511 kubelet[2926]: E0213 19:01:34.207454 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-65?timeout=10s\": dial tcp 172.31.27.65:6443: connect: connection refused" interval="200ms" Feb 13 19:01:34.210755 kubelet[2926]: I0213 19:01:34.210714 2926 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:01:34.210970 kubelet[2926]: I0213 19:01:34.210948 2926 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:01:34.211204 kubelet[2926]: I0213 19:01:34.211168 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:01:34.224008 kubelet[2926]: I0213 19:01:34.223930 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:01:34.227654 kubelet[2926]: I0213 19:01:34.227578 2926 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:01:34.227824 kubelet[2926]: I0213 19:01:34.227699 2926 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:01:34.227824 kubelet[2926]: I0213 19:01:34.227736 2926 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:01:34.227964 kubelet[2926]: E0213 19:01:34.227813 2926 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:01:34.238490 kubelet[2926]: W0213 19:01:34.238045 2926 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.238490 kubelet[2926]: E0213 19:01:34.238143 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:34.252891 kubelet[2926]: I0213 19:01:34.252849 2926 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:01:34.252891 kubelet[2926]: I0213 19:01:34.252885 2926 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:01:34.253111 kubelet[2926]: I0213 19:01:34.252918 2926 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:01:34.256325 kubelet[2926]: I0213 19:01:34.256244 2926 policy_none.go:49] "None policy: Start" Feb 13 19:01:34.257758 kubelet[2926]: I0213 19:01:34.257716 2926 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:01:34.257758 kubelet[2926]: I0213 19:01:34.257765 2926 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:01:34.270178 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:01:34.290609 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:01:34.298468 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:01:34.302244 kubelet[2926]: I0213 19:01:34.301654 2926 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-65" Feb 13 19:01:34.303058 kubelet[2926]: E0213 19:01:34.303008 2926 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.65:6443/api/v1/nodes\": dial tcp 172.31.27.65:6443: connect: connection refused" node="ip-172-31-27-65" Feb 13 19:01:34.308174 kubelet[2926]: I0213 19:01:34.308121 2926 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:01:34.308535 kubelet[2926]: I0213 19:01:34.308474 2926 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:01:34.308663 kubelet[2926]: I0213 19:01:34.308649 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:01:34.312938 kubelet[2926]: E0213 19:01:34.312870 2926 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-65\" not found" Feb 13 19:01:34.328098 kubelet[2926]: I0213 19:01:34.328030 2926 topology_manager.go:215] "Topology Admit Handler" podUID="19149eb4bd42f11c39a6db30fb406dd6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-65" Feb 13 19:01:34.330600 kubelet[2926]: I0213 19:01:34.330538 2926 topology_manager.go:215] "Topology Admit Handler" podUID="55e3a529ccdbce46ddb72041b364e169" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-65" Feb 13 19:01:34.333594 kubelet[2926]: I0213 19:01:34.333003 2926 topology_manager.go:215] "Topology Admit Handler" podUID="ce4edff067d340527e26a3eb29b0c19d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:34.347545 systemd[1]: Created slice kubepods-burstable-pod19149eb4bd42f11c39a6db30fb406dd6.slice - libcontainer container kubepods-burstable-pod19149eb4bd42f11c39a6db30fb406dd6.slice. Feb 13 19:01:34.370601 systemd[1]: Created slice kubepods-burstable-pod55e3a529ccdbce46ddb72041b364e169.slice - libcontainer container kubepods-burstable-pod55e3a529ccdbce46ddb72041b364e169.slice. Feb 13 19:01:34.388878 systemd[1]: Created slice kubepods-burstable-podce4edff067d340527e26a3eb29b0c19d.slice - libcontainer container kubepods-burstable-podce4edff067d340527e26a3eb29b0c19d.slice. 
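
Each "Created slice" entry above follows a fixed naming scheme: kubepods-<qos>-pod<uid>.slice, where the kubelet's systemd cgroup driver replaces "-" in the pod UID with "_" (in systemd unit names, "-" encodes slice hierarchy). A sketch of that mapping; the scheme matches the slice names in this log, but the helper function is hypothetical:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceName is a hypothetical helper reproducing the naming visible
    // in the "Created slice" entries above.
    func sliceName(qos, podUID string) string {
    	uid := strings.ReplaceAll(podUID, "-", "_")
    	if qos == "" { // guaranteed pods live directly under kubepods.slice
    		return "kubepods-pod" + uid + ".slice"
    	}
    	return "kubepods-" + qos + "-pod" + uid + ".slice"
    }

    func main() {
    	// Reproduces two slice names that appear in this log.
    	fmt.Println(sliceName("burstable", "19149eb4bd42f11c39a6db30fb406dd6"))
    	fmt.Println(sliceName("besteffort", "25cc1afa-ad89-4dcb-a5d9-df7a1071aa52"))
    }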
Feb 13 19:01:34.407885 kubelet[2926]: I0213 19:01:34.407830 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55e3a529ccdbce46ddb72041b364e169-ca-certs\") pod \"kube-apiserver-ip-172-31-27-65\" (UID: \"55e3a529ccdbce46ddb72041b364e169\") " pod="kube-system/kube-apiserver-ip-172-31-27-65" Feb 13 19:01:34.408056 kubelet[2926]: I0213 19:01:34.407894 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55e3a529ccdbce46ddb72041b364e169-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-65\" (UID: \"55e3a529ccdbce46ddb72041b364e169\") " pod="kube-system/kube-apiserver-ip-172-31-27-65" Feb 13 19:01:34.408056 kubelet[2926]: I0213 19:01:34.407940 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55e3a529ccdbce46ddb72041b364e169-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-65\" (UID: \"55e3a529ccdbce46ddb72041b364e169\") " pod="kube-system/kube-apiserver-ip-172-31-27-65" Feb 13 19:01:34.408056 kubelet[2926]: I0213 19:01:34.407979 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:34.408056 kubelet[2926]: I0213 19:01:34.408013 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:34.408056 kubelet[2926]: I0213 19:01:34.408050 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:34.408385 kubelet[2926]: I0213 19:01:34.408088 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19149eb4bd42f11c39a6db30fb406dd6-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-65\" (UID: \"19149eb4bd42f11c39a6db30fb406dd6\") " pod="kube-system/kube-scheduler-ip-172-31-27-65" Feb 13 19:01:34.408385 kubelet[2926]: I0213 19:01:34.408122 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:34.408385 kubelet[2926]: I0213 19:01:34.408159 2926 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:34.408875 kubelet[2926]: E0213 19:01:34.408810 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-65?timeout=10s\": dial tcp 172.31.27.65:6443: connect: connection refused" interval="400ms" Feb 13 19:01:34.506444 kubelet[2926]: I0213 19:01:34.506147 2926 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-65" Feb 13 19:01:34.506775 kubelet[2926]: E0213 19:01:34.506624 2926 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.65:6443/api/v1/nodes\": dial tcp 172.31.27.65:6443: connect: connection refused" node="ip-172-31-27-65" Feb 13 19:01:34.666035 containerd[1951]: time="2025-02-13T19:01:34.665880147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-65,Uid:19149eb4bd42f11c39a6db30fb406dd6,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:34.684338 containerd[1951]: time="2025-02-13T19:01:34.684182583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-65,Uid:55e3a529ccdbce46ddb72041b364e169,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:34.693993 containerd[1951]: time="2025-02-13T19:01:34.693916851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-65,Uid:ce4edff067d340527e26a3eb29b0c19d,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:34.809916 kubelet[2926]: E0213 19:01:34.809791 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-65?timeout=10s\": dial tcp 172.31.27.65:6443: connect: connection refused" interval="800ms" Feb 13 19:01:34.910360 kubelet[2926]: I0213 19:01:34.910192 2926 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-65" Feb 13 19:01:34.910735 kubelet[2926]: E0213 19:01:34.910688 2926 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.65:6443/api/v1/nodes\": dial tcp 172.31.27.65:6443: connect: connection refused" node="ip-172-31-27-65" Feb 13 19:01:35.009297 kubelet[2926]: W0213 19:01:35.009202 2926 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-65&limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.009493 kubelet[2926]: E0213 19:01:35.009323 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-65&limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.117585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346388333.mount: Deactivated successfully. 
Feb 13 19:01:35.133392 containerd[1951]: time="2025-02-13T19:01:35.133276501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:01:35.137841 containerd[1951]: time="2025-02-13T19:01:35.137489377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:01:35.142190 containerd[1951]: time="2025-02-13T19:01:35.142098517Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:01:35.145267 containerd[1951]: time="2025-02-13T19:01:35.145146865Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:01:35.148355 containerd[1951]: time="2025-02-13T19:01:35.146619829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:01:35.155000 kubelet[2926]: W0213 19:01:35.154940 2926 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.156924 kubelet[2926]: E0213 19:01:35.156866 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.159381 containerd[1951]: time="2025-02-13T19:01:35.159261037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:01:35.162063 containerd[1951]: time="2025-02-13T19:01:35.161967301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:01:35.164505 containerd[1951]: time="2025-02-13T19:01:35.164428753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:01:35.170148 containerd[1951]: time="2025-02-13T19:01:35.170080837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.555042ms" Feb 13 19:01:35.173272 containerd[1951]: time="2025-02-13T19:01:35.173194897Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.193886ms" Feb 13 19:01:35.174229 kubelet[2926]: W0213 19:01:35.174122 2926 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.174229 kubelet[2926]: E0213 19:01:35.174230 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.181889 containerd[1951]: time="2025-02-13T19:01:35.181582897Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 487.557482ms" Feb 13 19:01:35.241034 kubelet[2926]: W0213 19:01:35.240962 2926 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.241167 kubelet[2926]: E0213 19:01:35.241038 2926 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.65:6443: connect: connection refused Feb 13 19:01:35.401898 containerd[1951]: time="2025-02-13T19:01:35.400632350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:35.401898 containerd[1951]: time="2025-02-13T19:01:35.400827278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:35.401898 containerd[1951]: time="2025-02-13T19:01:35.400887614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:35.402502 containerd[1951]: time="2025-02-13T19:01:35.402051686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:35.411409 containerd[1951]: time="2025-02-13T19:01:35.410919050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:35.411409 containerd[1951]: time="2025-02-13T19:01:35.411048206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:35.411409 containerd[1951]: time="2025-02-13T19:01:35.411087494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:35.412063 containerd[1951]: time="2025-02-13T19:01:35.411884870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:35.418399 containerd[1951]: time="2025-02-13T19:01:35.418229391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:35.418711 containerd[1951]: time="2025-02-13T19:01:35.418350015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:35.418711 containerd[1951]: time="2025-02-13T19:01:35.418389519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:35.418711 containerd[1951]: time="2025-02-13T19:01:35.418556103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:35.440665 systemd[1]: Started cri-containerd-eb9c8b11e78a4d2f8fa50d9c4dab48bddec42fab840fd50d73ad31adf6b60936.scope - libcontainer container eb9c8b11e78a4d2f8fa50d9c4dab48bddec42fab840fd50d73ad31adf6b60936. Feb 13 19:01:35.478631 systemd[1]: Started cri-containerd-24666f5cada685e4e3999d0dbebaa20d882675081c5f7f979c7b39cef496c0c8.scope - libcontainer container 24666f5cada685e4e3999d0dbebaa20d882675081c5f7f979c7b39cef496c0c8. Feb 13 19:01:35.490518 systemd[1]: Started cri-containerd-8dd574c8b14af2c6fa0d115ddf2f24f85e02c71a943dfad8be5a0955386c7d63.scope - libcontainer container 8dd574c8b14af2c6fa0d115ddf2f24f85e02c71a943dfad8be5a0955386c7d63. Feb 13 19:01:35.592495 containerd[1951]: time="2025-02-13T19:01:35.592284015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-65,Uid:19149eb4bd42f11c39a6db30fb406dd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb9c8b11e78a4d2f8fa50d9c4dab48bddec42fab840fd50d73ad31adf6b60936\"" Feb 13 19:01:35.603012 containerd[1951]: time="2025-02-13T19:01:35.602942223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-65,Uid:55e3a529ccdbce46ddb72041b364e169,Namespace:kube-system,Attempt:0,} returns sandbox id \"24666f5cada685e4e3999d0dbebaa20d882675081c5f7f979c7b39cef496c0c8\"" Feb 13 19:01:35.612544 containerd[1951]: time="2025-02-13T19:01:35.612470139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-65,Uid:ce4edff067d340527e26a3eb29b0c19d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dd574c8b14af2c6fa0d115ddf2f24f85e02c71a943dfad8be5a0955386c7d63\"" Feb 13 19:01:35.615386 kubelet[2926]: E0213 19:01:35.614066 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-65?timeout=10s\": dial tcp 172.31.27.65:6443: connect: connection refused" interval="1.6s" Feb 13 19:01:35.615554 containerd[1951]: time="2025-02-13T19:01:35.615122356Z" level=info msg="CreateContainer within sandbox \"eb9c8b11e78a4d2f8fa50d9c4dab48bddec42fab840fd50d73ad31adf6b60936\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:01:35.622812 containerd[1951]: time="2025-02-13T19:01:35.622749160Z" level=info msg="CreateContainer within sandbox \"24666f5cada685e4e3999d0dbebaa20d882675081c5f7f979c7b39cef496c0c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:01:35.629534 containerd[1951]: time="2025-02-13T19:01:35.629282728Z" level=info msg="CreateContainer within sandbox \"8dd574c8b14af2c6fa0d115ddf2f24f85e02c71a943dfad8be5a0955386c7d63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:01:35.671164 containerd[1951]: time="2025-02-13T19:01:35.671021680Z" level=info 
msg="CreateContainer within sandbox \"eb9c8b11e78a4d2f8fa50d9c4dab48bddec42fab840fd50d73ad31adf6b60936\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb\"" Feb 13 19:01:35.674482 containerd[1951]: time="2025-02-13T19:01:35.674394892Z" level=info msg="StartContainer for \"44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb\"" Feb 13 19:01:35.680598 containerd[1951]: time="2025-02-13T19:01:35.680520472Z" level=info msg="CreateContainer within sandbox \"8dd574c8b14af2c6fa0d115ddf2f24f85e02c71a943dfad8be5a0955386c7d63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6\"" Feb 13 19:01:35.683736 containerd[1951]: time="2025-02-13T19:01:35.683552548Z" level=info msg="StartContainer for \"c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6\"" Feb 13 19:01:35.685924 containerd[1951]: time="2025-02-13T19:01:35.685836748Z" level=info msg="CreateContainer within sandbox \"24666f5cada685e4e3999d0dbebaa20d882675081c5f7f979c7b39cef496c0c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f6787cddd795064219cb42798d2cb31b3a6bcf4d451f5b5f8ca1f41c7d56b99\"" Feb 13 19:01:35.687163 containerd[1951]: time="2025-02-13T19:01:35.687103048Z" level=info msg="StartContainer for \"5f6787cddd795064219cb42798d2cb31b3a6bcf4d451f5b5f8ca1f41c7d56b99\"" Feb 13 19:01:35.717203 kubelet[2926]: I0213 19:01:35.715131 2926 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-65" Feb 13 19:01:35.717864 kubelet[2926]: E0213 19:01:35.717782 2926 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.65:6443/api/v1/nodes\": dial tcp 172.31.27.65:6443: connect: connection refused" node="ip-172-31-27-65" Feb 13 19:01:35.745815 systemd[1]: Started cri-containerd-44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb.scope - libcontainer container 44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb. Feb 13 19:01:35.779919 systemd[1]: Started cri-containerd-c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6.scope - libcontainer container c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6. Feb 13 19:01:35.802662 systemd[1]: Started cri-containerd-5f6787cddd795064219cb42798d2cb31b3a6bcf4d451f5b5f8ca1f41c7d56b99.scope - libcontainer container 5f6787cddd795064219cb42798d2cb31b3a6bcf4d451f5b5f8ca1f41c7d56b99. Feb 13 19:01:35.914564 containerd[1951]: time="2025-02-13T19:01:35.914369333Z" level=info msg="StartContainer for \"44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb\" returns successfully" Feb 13 19:01:35.922189 containerd[1951]: time="2025-02-13T19:01:35.921505589Z" level=info msg="StartContainer for \"c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6\" returns successfully" Feb 13 19:01:35.943378 containerd[1951]: time="2025-02-13T19:01:35.943265069Z" level=info msg="StartContainer for \"5f6787cddd795064219cb42798d2cb31b3a6bcf4d451f5b5f8ca1f41c7d56b99\" returns successfully" Feb 13 19:01:36.995334 update_engine[1937]: I20250213 19:01:36.993350 1937 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:01:37.109437 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3217) Feb 13 19:01:37.328693 kubelet[2926]: I0213 19:01:37.327292 2926 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-65" Feb 13 19:01:37.646634 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3221) Feb 13 19:01:39.966772 kubelet[2926]: E0213 19:01:39.966692 2926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-65\" not found" node="ip-172-31-27-65" Feb 13 19:01:40.094148 kubelet[2926]: I0213 19:01:40.093786 2926 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-65" Feb 13 19:01:40.152644 kubelet[2926]: E0213 19:01:40.152213 2926 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-65.1823d9beb34991e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-65,UID:ip-172-31-27-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-65,},FirstTimestamp:2025-02-13 19:01:34.170100192 +0000 UTC m=+2.370244848,LastTimestamp:2025-02-13 19:01:34.170100192 +0000 UTC m=+2.370244848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-65,}" Feb 13 19:01:40.172472 kubelet[2926]: I0213 19:01:40.172401 2926 apiserver.go:52] "Watching apiserver" Feb 13 19:01:40.204614 kubelet[2926]: I0213 19:01:40.204528 2926 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:01:42.250245 systemd[1]: Reload requested from client PID 3389 ('systemctl') (unit session-9.scope)... Feb 13 19:01:42.250270 systemd[1]: Reloading... Feb 13 19:01:42.462451 zram_generator::config[3443]: No configuration found. Feb 13 19:01:42.719544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:01:42.985809 systemd[1]: Reloading finished in 734 ms. Feb 13 19:01:43.040858 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:43.050598 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:01:43.051069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:43.051168 systemd[1]: kubelet.service: Consumed 3.155s CPU time, 116.1M memory peak. Feb 13 19:01:43.062714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:43.398620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:43.409037 (kubelet)[3494]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:01:43.515980 kubelet[3494]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:01:43.515980 kubelet[3494]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 13 19:01:43.515980 kubelet[3494]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:01:43.515980 kubelet[3494]: I0213 19:01:43.515671 3494 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:01:43.527467 kubelet[3494]: I0213 19:01:43.527011 3494 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:01:43.527467 kubelet[3494]: I0213 19:01:43.527050 3494 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:01:43.527690 kubelet[3494]: I0213 19:01:43.527494 3494 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:01:43.530875 kubelet[3494]: I0213 19:01:43.530811 3494 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:01:43.533334 kubelet[3494]: I0213 19:01:43.533255 3494 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:01:43.557592 kubelet[3494]: I0213 19:01:43.557527 3494 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:01:43.558694 kubelet[3494]: I0213 19:01:43.557945 3494 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:01:43.558694 kubelet[3494]: I0213 19:01:43.558051 3494 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:01:43.558694 kubelet[3494]: I0213 19:01:43.558459 3494 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:01:43.558694 kubelet[3494]: I0213 19:01:43.558480 3494 container_manager_linux.go:301] "Creating device plugin manager" 
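
The three deprecation warnings above all point at the same migration: move these flags into the file passed via --config. A small lookup table for the flags named in the log; the flag names on the left are verbatim from the warnings, while the KubeletConfiguration field names on the right are this editor's understanding of the upstream replacements, not something stated in this log (--pod-infra-container-image has no config equivalent and is slated for removal):

    package main

    import "fmt"

    func main() {
    	// Deprecated kubelet flags from the warnings above, mapped to the
    	// KubeletConfiguration fields believed to replace them (assumption:
    	// field names per upstream docs, not taken from this log).
    	migrations := map[string]string{
    		"--container-runtime-endpoint": "containerRuntimeEndpoint",
    		"--volume-plugin-dir":          "volumePluginDir",
    		"--pod-infra-container-image":  "(no config field; flag to be removed)",
    	}
    	for flag, field := range migrations {
    		fmt.Printf("%-32s -> %s\n", flag, field)
    	}
    }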
Feb 13 19:01:43.558694 kubelet[3494]: I0213 19:01:43.558584 3494 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:01:43.560940 kubelet[3494]: I0213 19:01:43.558856 3494 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:01:43.560940 kubelet[3494]: I0213 19:01:43.558882 3494 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:01:43.560940 kubelet[3494]: I0213 19:01:43.558968 3494 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:01:43.560940 kubelet[3494]: I0213 19:01:43.559038 3494 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:01:43.565391 kubelet[3494]: I0213 19:01:43.565165 3494 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:01:43.566346 kubelet[3494]: I0213 19:01:43.565785 3494 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:01:43.566662 kubelet[3494]: I0213 19:01:43.566640 3494 server.go:1264] "Started kubelet" Feb 13 19:01:43.574360 kubelet[3494]: I0213 19:01:43.571873 3494 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:01:43.581619 sudo[3508]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:01:43.582905 sudo[3508]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:01:43.589502 kubelet[3494]: I0213 19:01:43.586551 3494 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:01:43.592076 kubelet[3494]: I0213 19:01:43.592034 3494 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:01:43.610093 kubelet[3494]: I0213 19:01:43.606828 3494 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:01:43.610930 kubelet[3494]: I0213 19:01:43.587934 3494 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:01:43.611253 kubelet[3494]: I0213 19:01:43.611216 3494 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:01:43.612626 kubelet[3494]: I0213 19:01:43.611493 3494 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:01:43.613093 kubelet[3494]: I0213 19:01:43.612699 3494 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:01:43.671891 kubelet[3494]: I0213 19:01:43.671742 3494 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:01:43.672466 kubelet[3494]: I0213 19:01:43.672413 3494 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:01:43.700617 kubelet[3494]: E0213 19:01:43.699766 3494 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:01:43.705070 kubelet[3494]: I0213 19:01:43.703927 3494 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:01:43.713447 kubelet[3494]: I0213 19:01:43.713387 3494 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:01:43.715576 kubelet[3494]: I0213 19:01:43.715522 3494 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:01:43.715717 kubelet[3494]: I0213 19:01:43.715595 3494 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:01:43.715717 kubelet[3494]: I0213 19:01:43.715627 3494 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:01:43.715717 kubelet[3494]: E0213 19:01:43.715695 3494 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:01:43.717429 kubelet[3494]: E0213 19:01:43.716252 3494 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Feb 13 19:01:43.740332 kubelet[3494]: I0213 19:01:43.739580 3494 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-65" Feb 13 19:01:43.779237 kubelet[3494]: I0213 19:01:43.778402 3494 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-65" Feb 13 19:01:43.784343 kubelet[3494]: I0213 19:01:43.784267 3494 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-65" Feb 13 19:01:43.817655 kubelet[3494]: E0213 19:01:43.817608 3494 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:01:43.907025 kubelet[3494]: I0213 19:01:43.906978 3494 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:01:43.907025 kubelet[3494]: I0213 19:01:43.907010 3494 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:01:43.907232 kubelet[3494]: I0213 19:01:43.907045 3494 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:01:43.907489 kubelet[3494]: I0213 19:01:43.907282 3494 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:01:43.907489 kubelet[3494]: I0213 19:01:43.907382 3494 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:01:43.907489 kubelet[3494]: I0213 19:01:43.907423 3494 policy_none.go:49] "None policy: Start" Feb 13 19:01:43.909861 kubelet[3494]: I0213 19:01:43.909805 3494 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:01:43.909981 kubelet[3494]: I0213 19:01:43.909870 3494 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:01:43.911212 kubelet[3494]: I0213 19:01:43.910252 3494 state_mem.go:75] "Updated machine memory state" Feb 13 19:01:43.921743 kubelet[3494]: I0213 19:01:43.921152 3494 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:01:43.925663 kubelet[3494]: I0213 19:01:43.924393 3494 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:01:43.925663 kubelet[3494]: I0213 19:01:43.925064 3494 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:01:44.020328 kubelet[3494]: I0213 19:01:44.018490 3494 topology_manager.go:215] "Topology Admit Handler" podUID="55e3a529ccdbce46ddb72041b364e169" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-65" Feb 13 19:01:44.020328 kubelet[3494]: I0213 19:01:44.018674 3494 topology_manager.go:215] "Topology Admit Handler" podUID="ce4edff067d340527e26a3eb29b0c19d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:44.020328 kubelet[3494]: I0213 19:01:44.018756 3494 topology_manager.go:215] "Topology Admit Handler" podUID="19149eb4bd42f11c39a6db30fb406dd6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-65" Feb 13 
19:01:44.116187 kubelet[3494]: I0213 19:01:44.116003 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55e3a529ccdbce46ddb72041b364e169-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-65\" (UID: \"55e3a529ccdbce46ddb72041b364e169\") " pod="kube-system/kube-apiserver-ip-172-31-27-65" Feb 13 19:01:44.116942 kubelet[3494]: I0213 19:01:44.116846 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:44.117035 kubelet[3494]: I0213 19:01:44.116968 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:44.117101 kubelet[3494]: I0213 19:01:44.117068 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55e3a529ccdbce46ddb72041b364e169-ca-certs\") pod \"kube-apiserver-ip-172-31-27-65\" (UID: \"55e3a529ccdbce46ddb72041b364e169\") " pod="kube-system/kube-apiserver-ip-172-31-27-65" Feb 13 19:01:44.117207 kubelet[3494]: I0213 19:01:44.117169 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55e3a529ccdbce46ddb72041b364e169-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-65\" (UID: \"55e3a529ccdbce46ddb72041b364e169\") " pod="kube-system/kube-apiserver-ip-172-31-27-65" Feb 13 19:01:44.117447 kubelet[3494]: I0213 19:01:44.117410 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:44.118471 kubelet[3494]: I0213 19:01:44.118144 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:44.118471 kubelet[3494]: I0213 19:01:44.118228 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce4edff067d340527e26a3eb29b0c19d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-65\" (UID: \"ce4edff067d340527e26a3eb29b0c19d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-65" Feb 13 19:01:44.118471 kubelet[3494]: I0213 19:01:44.118287 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/19149eb4bd42f11c39a6db30fb406dd6-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-65\" (UID: \"19149eb4bd42f11c39a6db30fb406dd6\") " pod="kube-system/kube-scheduler-ip-172-31-27-65" Feb 13 19:01:44.513446 sudo[3508]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:44.563263 kubelet[3494]: I0213 19:01:44.563091 3494 apiserver.go:52] "Watching apiserver" Feb 13 19:01:44.611934 kubelet[3494]: I0213 19:01:44.611865 3494 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:01:44.878326 kubelet[3494]: I0213 19:01:44.877995 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-65" podStartSLOduration=0.877975466 podStartE2EDuration="877.975466ms" podCreationTimestamp="2025-02-13 19:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:01:44.877927442 +0000 UTC m=+1.458212901" watchObservedRunningTime="2025-02-13 19:01:44.877975466 +0000 UTC m=+1.458260937" Feb 13 19:01:44.919379 kubelet[3494]: I0213 19:01:44.918629 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-65" podStartSLOduration=0.918607682 podStartE2EDuration="918.607682ms" podCreationTimestamp="2025-02-13 19:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:01:44.898100606 +0000 UTC m=+1.478386221" watchObservedRunningTime="2025-02-13 19:01:44.918607682 +0000 UTC m=+1.498893153" Feb 13 19:01:44.941711 kubelet[3494]: I0213 19:01:44.941288 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-65" podStartSLOduration=0.941266994 podStartE2EDuration="941.266994ms" podCreationTimestamp="2025-02-13 19:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:01:44.920162978 +0000 UTC m=+1.500448473" watchObservedRunningTime="2025-02-13 19:01:44.941266994 +0000 UTC m=+1.521552477" Feb 13 19:01:47.115475 sudo[2314]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:47.139065 sshd[2313]: Connection closed by 139.178.89.65 port 44838 Feb 13 19:01:47.139934 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:47.146049 systemd[1]: sshd@8-172.31.27.65:22-139.178.89.65:44838.service: Deactivated successfully. Feb 13 19:01:47.150411 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:01:47.151152 systemd[1]: session-9.scope: Consumed 11.513s CPU time, 293.8M memory peak. Feb 13 19:01:47.155976 systemd-logind[1936]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:01:47.158022 systemd-logind[1936]: Removed session 9. Feb 13 19:01:56.168949 kubelet[3494]: I0213 19:01:56.168727 3494 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:01:56.170630 kubelet[3494]: I0213 19:01:56.170109 3494 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:01:56.170719 containerd[1951]: time="2025-02-13T19:01:56.169763110Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:01:56.829828 kubelet[3494]: I0213 19:01:56.829763 3494 topology_manager.go:215] "Topology Admit Handler" podUID="25cc1afa-ad89-4dcb-a5d9-df7a1071aa52" podNamespace="kube-system" podName="kube-proxy-zwjkh" Feb 13 19:01:56.851477 systemd[1]: Created slice kubepods-besteffort-pod25cc1afa_ad89_4dcb_a5d9_df7a1071aa52.slice - libcontainer container kubepods-besteffort-pod25cc1afa_ad89_4dcb_a5d9_df7a1071aa52.slice. Feb 13 19:01:56.880946 kubelet[3494]: I0213 19:01:56.880879 3494 topology_manager.go:215] "Topology Admit Handler" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" podNamespace="kube-system" podName="cilium-hkt7d" Feb 13 19:01:56.895542 kubelet[3494]: I0213 19:01:56.895479 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-bpf-maps\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.895774 kubelet[3494]: I0213 19:01:56.895551 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cni-path\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.895774 kubelet[3494]: I0213 19:01:56.895596 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-config-path\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.895774 kubelet[3494]: I0213 19:01:56.895634 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25cc1afa-ad89-4dcb-a5d9-df7a1071aa52-kube-proxy\") pod \"kube-proxy-zwjkh\" (UID: \"25cc1afa-ad89-4dcb-a5d9-df7a1071aa52\") " pod="kube-system/kube-proxy-zwjkh" Feb 13 19:01:56.895774 kubelet[3494]: I0213 19:01:56.895676 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-etc-cni-netd\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.895774 kubelet[3494]: I0213 19:01:56.895713 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-kernel\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897549 kubelet[3494]: I0213 19:01:56.895748 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4bm5\" (UniqueName: \"kubernetes.io/projected/25cc1afa-ad89-4dcb-a5d9-df7a1071aa52-kube-api-access-x4bm5\") pod \"kube-proxy-zwjkh\" (UID: \"25cc1afa-ad89-4dcb-a5d9-df7a1071aa52\") " pod="kube-system/kube-proxy-zwjkh" Feb 13 19:01:56.897549 kubelet[3494]: I0213 19:01:56.895787 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hostproc\") pod \"cilium-hkt7d\" 
(UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897549 kubelet[3494]: I0213 19:01:56.895822 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25cc1afa-ad89-4dcb-a5d9-df7a1071aa52-xtables-lock\") pod \"kube-proxy-zwjkh\" (UID: \"25cc1afa-ad89-4dcb-a5d9-df7a1071aa52\") " pod="kube-system/kube-proxy-zwjkh" Feb 13 19:01:56.897549 kubelet[3494]: I0213 19:01:56.895861 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-run\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897549 kubelet[3494]: I0213 19:01:56.895895 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-clustermesh-secrets\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897549 kubelet[3494]: I0213 19:01:56.895929 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25cc1afa-ad89-4dcb-a5d9-df7a1071aa52-lib-modules\") pod \"kube-proxy-zwjkh\" (UID: \"25cc1afa-ad89-4dcb-a5d9-df7a1071aa52\") " pod="kube-system/kube-proxy-zwjkh" Feb 13 19:01:56.897896 kubelet[3494]: I0213 19:01:56.895981 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-net\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897896 kubelet[3494]: I0213 19:01:56.896017 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24clc\" (UniqueName: \"kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-kube-api-access-24clc\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897896 kubelet[3494]: I0213 19:01:56.896052 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-cgroup\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897896 kubelet[3494]: I0213 19:01:56.896108 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-xtables-lock\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897896 kubelet[3494]: I0213 19:01:56.896541 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-lib-modules\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.897896 kubelet[3494]: I0213 19:01:56.896645 3494 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hubble-tls\") pod \"cilium-hkt7d\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") " pod="kube-system/cilium-hkt7d" Feb 13 19:01:56.901766 kubelet[3494]: W0213 19:01:56.901626 3494 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-27-65" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-65' and this object Feb 13 19:01:56.901766 kubelet[3494]: E0213 19:01:56.901687 3494 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-27-65" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-65' and this object Feb 13 19:01:56.901766 kubelet[3494]: W0213 19:01:56.901626 3494 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-27-65" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-65' and this object Feb 13 19:01:56.901766 kubelet[3494]: E0213 19:01:56.901728 3494 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-27-65" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-65' and this object Feb 13 19:01:56.901766 kubelet[3494]: W0213 19:01:56.901737 3494 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-27-65" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-65' and this object Feb 13 19:01:56.903736 kubelet[3494]: E0213 19:01:56.901770 3494 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-27-65" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-65' and this object Feb 13 19:01:56.907786 systemd[1]: Created slice kubepods-burstable-podf99e544e_94ba_4bfe_b934_33d1d8e3a5ac.slice - libcontainer container kubepods-burstable-podf99e544e_94ba_4bfe_b934_33d1d8e3a5ac.slice. 
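The reflector warnings above are the node authorizer at work: the identity system:node:ip-172-31-27-65 may only read a secret or configmap once a pod scheduled to this node references it, so the first list attempts fail until the API server's object graph links the new cilium pod to the node. A hedged sketch for reproducing the same authorization decision from the node with client-go; the kubeconfig path is an assumption about where the kubelet credentials live on this host, not something taken from the log:

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed location of the kubelet's kubeconfig on this node; verify
        // the path on your own system before relying on it.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Ask the API server whether this identity may list secrets in
        // kube-system -- the exact check the reflector failed above.
        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Namespace: "kube-system",
                    Verb:      "list",
                    Resource:  "secrets",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }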
Feb 13 19:01:56.980060 kubelet[3494]: I0213 19:01:56.979126 3494 topology_manager.go:215] "Topology Admit Handler" podUID="e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7" podNamespace="kube-system" podName="cilium-operator-599987898-659pc" Feb 13 19:01:56.997660 kubelet[3494]: I0213 19:01:56.997207 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q45wq\" (UniqueName: \"kubernetes.io/projected/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-kube-api-access-q45wq\") pod \"cilium-operator-599987898-659pc\" (UID: \"e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7\") " pod="kube-system/cilium-operator-599987898-659pc" Feb 13 19:01:56.997660 kubelet[3494]: I0213 19:01:56.997372 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-cilium-config-path\") pod \"cilium-operator-599987898-659pc\" (UID: \"e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7\") " pod="kube-system/cilium-operator-599987898-659pc" Feb 13 19:01:56.999629 systemd[1]: Created slice kubepods-besteffort-pode3a88d9a_6590_4d1a_b4c6_6608e2bb92e7.slice - libcontainer container kubepods-besteffort-pode3a88d9a_6590_4d1a_b4c6_6608e2bb92e7.slice. Feb 13 19:01:57.164282 containerd[1951]: time="2025-02-13T19:01:57.164120423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zwjkh,Uid:25cc1afa-ad89-4dcb-a5d9-df7a1071aa52,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:57.223759 containerd[1951]: time="2025-02-13T19:01:57.223278431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:57.223759 containerd[1951]: time="2025-02-13T19:01:57.223431443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:57.223759 containerd[1951]: time="2025-02-13T19:01:57.223469495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:57.223759 containerd[1951]: time="2025-02-13T19:01:57.223672223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:57.266624 systemd[1]: Started cri-containerd-21218801d3c05a1111a1a5be75c11fa006df22ff0f02e2ae89d4089514f28d2f.scope - libcontainer container 21218801d3c05a1111a1a5be75c11fa006df22ff0f02e2ae89d4089514f28d2f. 
Feb 13 19:01:57.309285 containerd[1951]: time="2025-02-13T19:01:57.309229031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zwjkh,Uid:25cc1afa-ad89-4dcb-a5d9-df7a1071aa52,Namespace:kube-system,Attempt:0,} returns sandbox id \"21218801d3c05a1111a1a5be75c11fa006df22ff0f02e2ae89d4089514f28d2f\"" Feb 13 19:01:57.315601 containerd[1951]: time="2025-02-13T19:01:57.315412091Z" level=info msg="CreateContainer within sandbox \"21218801d3c05a1111a1a5be75c11fa006df22ff0f02e2ae89d4089514f28d2f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:01:57.347702 containerd[1951]: time="2025-02-13T19:01:57.347559839Z" level=info msg="CreateContainer within sandbox \"21218801d3c05a1111a1a5be75c11fa006df22ff0f02e2ae89d4089514f28d2f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"504da7ad6151cfe67bbb0c2bbecdf6dd2c790a1148f88e0f655aad8243c37bc3\"" Feb 13 19:01:57.350144 containerd[1951]: time="2025-02-13T19:01:57.348622451Z" level=info msg="StartContainer for \"504da7ad6151cfe67bbb0c2bbecdf6dd2c790a1148f88e0f655aad8243c37bc3\"" Feb 13 19:01:57.394621 systemd[1]: Started cri-containerd-504da7ad6151cfe67bbb0c2bbecdf6dd2c790a1148f88e0f655aad8243c37bc3.scope - libcontainer container 504da7ad6151cfe67bbb0c2bbecdf6dd2c790a1148f88e0f655aad8243c37bc3. Feb 13 19:01:57.458337 containerd[1951]: time="2025-02-13T19:01:57.458179872Z" level=info msg="StartContainer for \"504da7ad6151cfe67bbb0c2bbecdf6dd2c790a1148f88e0f655aad8243c37bc3\" returns successfully" Feb 13 19:01:57.889836 kubelet[3494]: I0213 19:01:57.889747 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zwjkh" podStartSLOduration=1.88972619 podStartE2EDuration="1.88972619s" podCreationTimestamp="2025-02-13 19:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:01:57.889683602 +0000 UTC m=+14.469969061" watchObservedRunningTime="2025-02-13 19:01:57.88972619 +0000 UTC m=+14.470011649" Feb 13 19:01:57.999207 kubelet[3494]: E0213 19:01:57.998582 3494 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 13 19:01:57.999207 kubelet[3494]: E0213 19:01:57.998700 3494 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-clustermesh-secrets podName:f99e544e-94ba-4bfe-b934-33d1d8e3a5ac nodeName:}" failed. No retries permitted until 2025-02-13 19:01:58.498669535 +0000 UTC m=+15.078954982 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-clustermesh-secrets") pod "cilium-hkt7d" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:01:58.001855 kubelet[3494]: E0213 19:01:58.001746 3494 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 19:01:58.001855 kubelet[3494]: E0213 19:01:58.001787 3494 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-hkt7d: failed to sync secret cache: timed out waiting for the condition Feb 13 19:01:58.002077 kubelet[3494]: E0213 19:01:58.001890 3494 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hubble-tls podName:f99e544e-94ba-4bfe-b934-33d1d8e3a5ac nodeName:}" failed. No retries permitted until 2025-02-13 19:01:58.501863371 +0000 UTC m=+15.082148866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hubble-tls") pod "cilium-hkt7d" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:01:58.210289 containerd[1951]: time="2025-02-13T19:01:58.209645172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-659pc,Uid:e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:58.260369 containerd[1951]: time="2025-02-13T19:01:58.259735728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:58.260369 containerd[1951]: time="2025-02-13T19:01:58.259834728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:58.260369 containerd[1951]: time="2025-02-13T19:01:58.259882548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:58.260369 containerd[1951]: time="2025-02-13T19:01:58.260046516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:58.305770 systemd[1]: Started cri-containerd-9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308.scope - libcontainer container 9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308. 
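Both MountVolume.SetUp failures above are rescheduled rather than retried immediately: nestedpendingoperations reports "No retries permitted until ..." with durationBeforeRetry 500ms, and the delay grows on repeated failures. A minimal sketch of that backoff shape; the doubling factor and the cap are illustrative assumptions, only the 500ms starting point comes from the log:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // mountVolume stands in for MountVolume.SetUp; here it fails until the
    // secret cache has synced, simulated with an attempt counter.
    func mountVolume(attempt int) error {
        if attempt < 3 {
            return errors.New("failed to sync secret cache: timed out waiting for the condition")
        }
        return nil
    }

    func main() {
        backoff := 500 * time.Millisecond // first durationBeforeRetry seen in the log
        const factor = 2                  // assumed growth factor
        maxBackoff := 2 * time.Minute     // assumed cap

        for attempt := 0; ; attempt++ {
            err := mountVolume(attempt)
            if err == nil {
                fmt.Println("volume mounted")
                return
            }
            fmt.Printf("attempt %d failed (%v); no retries permitted for %v\n", attempt, err, backoff)
            time.Sleep(backoff)
            backoff *= factor
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }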
Feb 13 19:01:58.364579 containerd[1951]: time="2025-02-13T19:01:58.364417573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-659pc,Uid:e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\"" Feb 13 19:01:58.369837 containerd[1951]: time="2025-02-13T19:01:58.369764329Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:01:58.718228 containerd[1951]: time="2025-02-13T19:01:58.718134722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hkt7d,Uid:f99e544e-94ba-4bfe-b934-33d1d8e3a5ac,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:58.762945 containerd[1951]: time="2025-02-13T19:01:58.762474458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:58.762945 containerd[1951]: time="2025-02-13T19:01:58.762578978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:58.762945 containerd[1951]: time="2025-02-13T19:01:58.762604502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:58.762945 containerd[1951]: time="2025-02-13T19:01:58.762753470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:58.796845 systemd[1]: Started cri-containerd-a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3.scope - libcontainer container a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3. Feb 13 19:01:58.842492 containerd[1951]: time="2025-02-13T19:01:58.842188515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hkt7d,Uid:f99e544e-94ba-4bfe-b934-33d1d8e3a5ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\"" Feb 13 19:02:00.027660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834979920.mount: Deactivated successfully. 
Feb 13 19:02:00.754599 containerd[1951]: time="2025-02-13T19:02:00.754507864Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:00.756676 containerd[1951]: time="2025-02-13T19:02:00.756585388Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:02:00.759065 containerd[1951]: time="2025-02-13T19:02:00.758994220Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:00.762213 containerd[1951]: time="2025-02-13T19:02:00.761913424Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.392081499s" Feb 13 19:02:00.762213 containerd[1951]: time="2025-02-13T19:02:00.761968204Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:02:00.766245 containerd[1951]: time="2025-02-13T19:02:00.765989704Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:02:00.768023 containerd[1951]: time="2025-02-13T19:02:00.767829136Z" level=info msg="CreateContainer within sandbox \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:02:00.801009 containerd[1951]: time="2025-02-13T19:02:00.800887349Z" level=info msg="CreateContainer within sandbox \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\"" Feb 13 19:02:00.803035 containerd[1951]: time="2025-02-13T19:02:00.802623845Z" level=info msg="StartContainer for \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\"" Feb 13 19:02:00.863632 systemd[1]: Started cri-containerd-01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e.scope - libcontainer container 01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e. Feb 13 19:02:00.917884 containerd[1951]: time="2025-02-13T19:02:00.917820101Z" level=info msg="StartContainer for \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\" returns successfully" Feb 13 19:02:06.602048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165690817.mount: Deactivated successfully. 
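The operator-generic pull above reports 17135306 bytes read in 2.392081499s, roughly 6.8 MiB/s from quay.io. The same arithmetic as a quick check, using only numbers printed in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 17135306 // "bytes read" from the stop-pulling line
        dur, err := time.ParseDuration("2.392081499s") // pull time from the log
        if err != nil {
            panic(err)
        }
        mibPerSec := float64(bytesRead) / dur.Seconds() / (1 << 20)
        fmt.Printf("%.2f MiB/s\n", mibPerSec) // ~6.83 MiB/s
    }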
Feb 13 19:02:09.252084 containerd[1951]: time="2025-02-13T19:02:09.251985155Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:09.254253 containerd[1951]: time="2025-02-13T19:02:09.253819475Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:02:09.257342 containerd[1951]: time="2025-02-13T19:02:09.256357295Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:09.260276 containerd[1951]: time="2025-02-13T19:02:09.259795511Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.493741499s" Feb 13 19:02:09.260276 containerd[1951]: time="2025-02-13T19:02:09.259979027Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:02:09.265911 containerd[1951]: time="2025-02-13T19:02:09.265830527Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:02:09.290130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004899494.mount: Deactivated successfully. Feb 13 19:02:09.297941 containerd[1951]: time="2025-02-13T19:02:09.297867647Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4\"" Feb 13 19:02:09.298889 containerd[1951]: time="2025-02-13T19:02:09.298824059Z" level=info msg="StartContainer for \"700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4\"" Feb 13 19:02:09.353614 systemd[1]: Started cri-containerd-700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4.scope - libcontainer container 700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4. Feb 13 19:02:09.405866 containerd[1951]: time="2025-02-13T19:02:09.405795191Z" level=info msg="StartContainer for \"700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4\" returns successfully" Feb 13 19:02:09.428280 systemd[1]: cri-containerd-700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4.scope: Deactivated successfully. 
Feb 13 19:02:09.946867 kubelet[3494]: I0213 19:02:09.945148 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-659pc" podStartSLOduration=11.549279491 podStartE2EDuration="13.945125498s" podCreationTimestamp="2025-02-13 19:01:56 +0000 UTC" firstStartedPulling="2025-02-13 19:01:58.367979317 +0000 UTC m=+14.948264776" lastFinishedPulling="2025-02-13 19:02:00.763825228 +0000 UTC m=+17.344110783" observedRunningTime="2025-02-13 19:02:02.014539623 +0000 UTC m=+18.594825094" watchObservedRunningTime="2025-02-13 19:02:09.945125498 +0000 UTC m=+26.525410969" Feb 13 19:02:10.286261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4-rootfs.mount: Deactivated successfully. Feb 13 19:02:10.356857 containerd[1951]: time="2025-02-13T19:02:10.356768232Z" level=info msg="shim disconnected" id=700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4 namespace=k8s.io Feb 13 19:02:10.356857 containerd[1951]: time="2025-02-13T19:02:10.356846268Z" level=warning msg="cleaning up after shim disconnected" id=700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4 namespace=k8s.io Feb 13 19:02:10.357901 containerd[1951]: time="2025-02-13T19:02:10.356868768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:10.379541 containerd[1951]: time="2025-02-13T19:02:10.379378596Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:02:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:02:10.931575 containerd[1951]: time="2025-02-13T19:02:10.931500819Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:02:10.953404 containerd[1951]: time="2025-02-13T19:02:10.952609827Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880\"" Feb 13 19:02:10.954605 containerd[1951]: time="2025-02-13T19:02:10.954544119Z" level=info msg="StartContainer for \"557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880\"" Feb 13 19:02:11.034664 systemd[1]: Started cri-containerd-557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880.scope - libcontainer container 557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880. Feb 13 19:02:11.085909 containerd[1951]: time="2025-02-13T19:02:11.085832928Z" level=info msg="StartContainer for \"557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880\" returns successfully" Feb 13 19:02:11.108979 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:02:11.109566 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:11.110659 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:11.118103 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:11.118622 systemd[1]: cri-containerd-557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880.scope: Deactivated successfully. Feb 13 19:02:11.167499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:02:11.177190 containerd[1951]: time="2025-02-13T19:02:11.177109152Z" level=info msg="shim disconnected" id=557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880 namespace=k8s.io Feb 13 19:02:11.177190 containerd[1951]: time="2025-02-13T19:02:11.177182160Z" level=warning msg="cleaning up after shim disconnected" id=557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880 namespace=k8s.io Feb 13 19:02:11.177489 containerd[1951]: time="2025-02-13T19:02:11.177201876Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:11.287134 systemd[1]: run-containerd-runc-k8s.io-557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880-runc.Cl3Vom.mount: Deactivated successfully. Feb 13 19:02:11.287681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880-rootfs.mount: Deactivated successfully. Feb 13 19:02:11.932457 containerd[1951]: time="2025-02-13T19:02:11.932239180Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:02:11.973147 containerd[1951]: time="2025-02-13T19:02:11.972584872Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820\"" Feb 13 19:02:11.974182 containerd[1951]: time="2025-02-13T19:02:11.974130496Z" level=info msg="StartContainer for \"424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820\"" Feb 13 19:02:12.047610 systemd[1]: Started cri-containerd-424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820.scope - libcontainer container 424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820. Feb 13 19:02:12.106298 containerd[1951]: time="2025-02-13T19:02:12.105477769Z" level=info msg="StartContainer for \"424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820\" returns successfully" Feb 13 19:02:12.112257 systemd[1]: cri-containerd-424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820.scope: Deactivated successfully. Feb 13 19:02:12.158007 containerd[1951]: time="2025-02-13T19:02:12.157805905Z" level=info msg="shim disconnected" id=424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820 namespace=k8s.io Feb 13 19:02:12.158502 containerd[1951]: time="2025-02-13T19:02:12.158437297Z" level=warning msg="cleaning up after shim disconnected" id=424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820 namespace=k8s.io Feb 13 19:02:12.160406 containerd[1951]: time="2025-02-13T19:02:12.158471701Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:12.288372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820-rootfs.mount: Deactivated successfully. 
Feb 13 19:02:12.942159 containerd[1951]: time="2025-02-13T19:02:12.942093845Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:02:12.976288 containerd[1951]: time="2025-02-13T19:02:12.975366245Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085\"" Feb 13 19:02:12.976288 containerd[1951]: time="2025-02-13T19:02:12.976208777Z" level=info msg="StartContainer for \"cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085\"" Feb 13 19:02:13.033631 systemd[1]: Started cri-containerd-cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085.scope - libcontainer container cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085. Feb 13 19:02:13.076374 systemd[1]: cri-containerd-cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085.scope: Deactivated successfully. Feb 13 19:02:13.081707 containerd[1951]: time="2025-02-13T19:02:13.081495218Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf99e544e_94ba_4bfe_b934_33d1d8e3a5ac.slice/cri-containerd-cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085.scope/memory.events\": no such file or directory" Feb 13 19:02:13.084054 containerd[1951]: time="2025-02-13T19:02:13.083984654Z" level=info msg="StartContainer for \"cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085\" returns successfully" Feb 13 19:02:13.121587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085-rootfs.mount: Deactivated successfully. Feb 13 19:02:13.129738 containerd[1951]: time="2025-02-13T19:02:13.129578270Z" level=info msg="shim disconnected" id=cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085 namespace=k8s.io Feb 13 19:02:13.129738 containerd[1951]: time="2025-02-13T19:02:13.129708194Z" level=warning msg="cleaning up after shim disconnected" id=cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085 namespace=k8s.io Feb 13 19:02:13.129738 containerd[1951]: time="2025-02-13T19:02:13.129730034Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:13.948172 containerd[1951]: time="2025-02-13T19:02:13.948113550Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:02:13.990693 containerd[1951]: time="2025-02-13T19:02:13.990618870Z" level=info msg="CreateContainer within sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\"" Feb 13 19:02:13.992020 containerd[1951]: time="2025-02-13T19:02:13.991938030Z" level=info msg="StartContainer for \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\"" Feb 13 19:02:14.054667 systemd[1]: Started cri-containerd-ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073.scope - libcontainer container ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073. 
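The sequence from 19:02:09 to 19:02:14 above is cilium's init-container chain running strictly in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each start, exit (their scope is deactivated and the shim rootfs unmounted), and only then is the next container created; cilium-agent is the long-running container at the end. A sketch of that run-to-completion sequencing; the step bodies are placeholders and only the container names come from the log:

    package main

    import "fmt"

    // step stands in for one init container: it must exit successfully
    // before the next one is created, mirroring the CreateContainer /
    // StartContainer / "scope: Deactivated" cadence in the log.
    type step struct {
        name string
        run  func() error
    }

    func main() {
        initContainers := []step{
            {"mount-cgroup", func() error { return nil }},
            {"apply-sysctl-overwrites", func() error { return nil }},
            {"mount-bpf-fs", func() error { return nil }},
            {"clean-cilium-state", func() error { return nil }},
        }
        for _, s := range initContainers {
            fmt.Printf("StartContainer for %q\n", s.name)
            if err := s.run(); err != nil {
                // A failing init container blocks the pod; the kubelet
                // restarts it according to the pod's restart policy.
                fmt.Printf("%s failed: %v\n", s.name, err)
                return
            }
            fmt.Printf("%s exited, scope deactivated\n", s.name)
        }
        fmt.Println("StartContainer for \"cilium-agent\"") // the long-running container
    }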
Feb 13 19:02:14.121820 containerd[1951]: time="2025-02-13T19:02:14.121746999Z" level=info msg="StartContainer for \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\" returns successfully" Feb 13 19:02:14.297817 kubelet[3494]: I0213 19:02:14.297772 3494 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:02:14.353370 kubelet[3494]: I0213 19:02:14.353034 3494 topology_manager.go:215] "Topology Admit Handler" podUID="eb23987c-8d7c-42bd-ba77-595e95919851" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4586z" Feb 13 19:02:14.357332 kubelet[3494]: I0213 19:02:14.357205 3494 topology_manager.go:215] "Topology Admit Handler" podUID="2090b4cc-8e74-4144-9fb3-15a6c82caa25" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dzpbh" Feb 13 19:02:14.372743 systemd[1]: Created slice kubepods-burstable-podeb23987c_8d7c_42bd_ba77_595e95919851.slice - libcontainer container kubepods-burstable-podeb23987c_8d7c_42bd_ba77_595e95919851.slice. Feb 13 19:02:14.396555 systemd[1]: Created slice kubepods-burstable-pod2090b4cc_8e74_4144_9fb3_15a6c82caa25.slice - libcontainer container kubepods-burstable-pod2090b4cc_8e74_4144_9fb3_15a6c82caa25.slice. Feb 13 19:02:14.518787 kubelet[3494]: I0213 19:02:14.518693 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6vjq\" (UniqueName: \"kubernetes.io/projected/2090b4cc-8e74-4144-9fb3-15a6c82caa25-kube-api-access-l6vjq\") pod \"coredns-7db6d8ff4d-dzpbh\" (UID: \"2090b4cc-8e74-4144-9fb3-15a6c82caa25\") " pod="kube-system/coredns-7db6d8ff4d-dzpbh" Feb 13 19:02:14.519284 kubelet[3494]: I0213 19:02:14.518915 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb23987c-8d7c-42bd-ba77-595e95919851-config-volume\") pod \"coredns-7db6d8ff4d-4586z\" (UID: \"eb23987c-8d7c-42bd-ba77-595e95919851\") " pod="kube-system/coredns-7db6d8ff4d-4586z" Feb 13 19:02:14.519284 kubelet[3494]: I0213 19:02:14.519162 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlswk\" (UniqueName: \"kubernetes.io/projected/eb23987c-8d7c-42bd-ba77-595e95919851-kube-api-access-nlswk\") pod \"coredns-7db6d8ff4d-4586z\" (UID: \"eb23987c-8d7c-42bd-ba77-595e95919851\") " pod="kube-system/coredns-7db6d8ff4d-4586z" Feb 13 19:02:14.519634 kubelet[3494]: I0213 19:02:14.519540 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2090b4cc-8e74-4144-9fb3-15a6c82caa25-config-volume\") pod \"coredns-7db6d8ff4d-dzpbh\" (UID: \"2090b4cc-8e74-4144-9fb3-15a6c82caa25\") " pod="kube-system/coredns-7db6d8ff4d-dzpbh" Feb 13 19:02:14.685836 containerd[1951]: time="2025-02-13T19:02:14.685120626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4586z,Uid:eb23987c-8d7c-42bd-ba77-595e95919851,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:14.707350 containerd[1951]: time="2025-02-13T19:02:14.707194710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzpbh,Uid:2090b4cc-8e74-4144-9fb3-15a6c82caa25,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:17.110857 (udev-worker)[4276]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:17.110915 (udev-worker)[4278]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:02:17.112122 systemd-networkd[1865]: cilium_host: Link UP Feb 13 19:02:17.112520 systemd-networkd[1865]: cilium_net: Link UP Feb 13 19:02:17.112886 systemd-networkd[1865]: cilium_net: Gained carrier Feb 13 19:02:17.115714 systemd-networkd[1865]: cilium_host: Gained carrier Feb 13 19:02:17.277338 (udev-worker)[4329]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:17.288163 systemd-networkd[1865]: cilium_vxlan: Link UP Feb 13 19:02:17.288182 systemd-networkd[1865]: cilium_vxlan: Gained carrier Feb 13 19:02:17.423992 systemd-networkd[1865]: cilium_net: Gained IPv6LL Feb 13 19:02:17.766771 kernel: NET: Registered PF_ALG protocol family Feb 13 19:02:17.935549 systemd-networkd[1865]: cilium_host: Gained IPv6LL Feb 13 19:02:18.959701 systemd-networkd[1865]: cilium_vxlan: Gained IPv6LL Feb 13 19:02:19.070824 systemd-networkd[1865]: lxc_health: Link UP Feb 13 19:02:19.078713 (udev-worker)[4327]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:19.085941 systemd-networkd[1865]: lxc_health: Gained carrier Feb 13 19:02:19.368478 systemd-networkd[1865]: lxc91529d5d20b7: Link UP Feb 13 19:02:19.374487 kernel: eth0: renamed from tmp6f062 Feb 13 19:02:19.379712 systemd-networkd[1865]: lxc91529d5d20b7: Gained carrier Feb 13 19:02:19.852336 systemd-networkd[1865]: lxc9d791fe679ff: Link UP Feb 13 19:02:19.857365 kernel: eth0: renamed from tmp23d05 Feb 13 19:02:19.865474 systemd-networkd[1865]: lxc9d791fe679ff: Gained carrier Feb 13 19:02:20.625032 systemd-networkd[1865]: lxc_health: Gained IPv6LL Feb 13 19:02:20.754731 kubelet[3494]: I0213 19:02:20.754628 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hkt7d" podStartSLOduration=14.337887384 podStartE2EDuration="24.754605348s" podCreationTimestamp="2025-02-13 19:01:56 +0000 UTC" firstStartedPulling="2025-02-13 19:01:58.844994535 +0000 UTC m=+15.425279994" lastFinishedPulling="2025-02-13 19:02:09.261712499 +0000 UTC m=+25.841997958" observedRunningTime="2025-02-13 19:02:15.033783171 +0000 UTC m=+31.614068630" watchObservedRunningTime="2025-02-13 19:02:20.754605348 +0000 UTC m=+37.334890819" Feb 13 19:02:21.200090 systemd-networkd[1865]: lxc9d791fe679ff: Gained IPv6LL Feb 13 19:02:21.391732 systemd-networkd[1865]: lxc91529d5d20b7: Gained IPv6LL Feb 13 19:02:22.928856 systemd[1]: Started sshd@9-172.31.27.65:22-139.178.89.65:51468.service - OpenSSH per-connection server daemon (139.178.89.65:51468). Feb 13 19:02:23.125137 sshd[4681]: Accepted publickey for core from 139.178.89.65 port 51468 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:23.126472 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:23.136351 systemd-logind[1936]: New session 10 of user core. Feb 13 19:02:23.144799 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:02:23.468779 sshd[4683]: Connection closed by 139.178.89.65 port 51468 Feb 13 19:02:23.470755 sshd-session[4681]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:23.477634 systemd[1]: sshd@9-172.31.27.65:22-139.178.89.65:51468.service: Deactivated successfully. Feb 13 19:02:23.483413 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:02:23.488868 systemd-logind[1936]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:02:23.494183 systemd-logind[1936]: Removed session 10. 
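The systemd-networkd entries above show each cilium interface (cilium_host, cilium_net, cilium_vxlan, lxc_health, the per-pod lxc devices) gaining carrier and then an IPv6 link-local address ("Gained IPv6LL"); the ntpd lines just below bind exactly those fe80:: addresses on port 123. A stdlib-only sketch that enumerates the link-local addresses such a daemon would pick up:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            addrs, err := ifc.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                ipnet, ok := a.(*net.IPNet)
                if !ok {
                    continue
                }
                // fe80::/10 addresses -- what "Gained IPv6LL" refers to and
                // what ntpd reports as [fe80::...%ifname]:123 listeners.
                if ipnet.IP.IsLinkLocalUnicast() {
                    fmt.Printf("%s %s\n", ifc.Name, ipnet.IP)
                }
            }
        }
    }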
Feb 13 19:02:23.570031 ntpd[1928]: Listen normally on 7 cilium_host 192.168.0.121:123 Feb 13 19:02:23.570151 ntpd[1928]: Listen normally on 8 cilium_net [fe80::429:a5ff:feab:b43c%4]:123 Feb 13 19:02:23.570230 ntpd[1928]: Listen normally on 9 cilium_host [fe80::45c:9aff:fe7f:8e50%5]:123 Feb 13 19:02:23.570297 ntpd[1928]: Listen normally on 10 cilium_vxlan [fe80::bcdd:48ff:fece:cefe%6]:123 Feb 13 19:02:23.570390 ntpd[1928]: Listen normally on 11 lxc_health [fe80::f807:bbff:fe9c:ddbd%8]:123 Feb 13 19:02:23.570456 ntpd[1928]: Listen normally on 12 lxc91529d5d20b7 [fe80::cbb:92ff:fefc:5c26%10]:123 Feb 13 19:02:23.570521 ntpd[1928]: Listen normally on 13 lxc9d791fe679ff [fe80::685e:12ff:fe26:988%12]:123 Feb 13 19:02:27.995099 containerd[1951]: time="2025-02-13T19:02:27.994790912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:27.995099 containerd[1951]: time="2025-02-13T19:02:27.994898552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:27.995099 containerd[1951]: time="2025-02-13T19:02:27.994952624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:27.998430 containerd[1951]: time="2025-02-13T19:02:27.997582880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:28.061894 systemd[1]: Started cri-containerd-23d056818811d531ad98f867516f7b24628c367aa5eefcab494a8b6e36946747.scope - libcontainer container 23d056818811d531ad98f867516f7b24628c367aa5eefcab494a8b6e36946747. Feb 13 19:02:28.101188 containerd[1951]: time="2025-02-13T19:02:28.099015244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:28.101188 containerd[1951]: time="2025-02-13T19:02:28.099398224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:28.101188 containerd[1951]: time="2025-02-13T19:02:28.099468496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:28.101188 containerd[1951]: time="2025-02-13T19:02:28.099646300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:28.158008 systemd[1]: run-containerd-runc-k8s.io-6f06230f958fd3dfaa2d6e66695f6ce5534e1f20dcd02cc3dd99bcfa0763910f-runc.ethnEj.mount: Deactivated successfully. Feb 13 19:02:28.176645 systemd[1]: Started cri-containerd-6f06230f958fd3dfaa2d6e66695f6ce5534e1f20dcd02cc3dd99bcfa0763910f.scope - libcontainer container 6f06230f958fd3dfaa2d6e66695f6ce5534e1f20dcd02cc3dd99bcfa0763910f. Feb 13 19:02:28.217947 containerd[1951]: time="2025-02-13T19:02:28.217879265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4586z,Uid:eb23987c-8d7c-42bd-ba77-595e95919851,Namespace:kube-system,Attempt:0,} returns sandbox id \"23d056818811d531ad98f867516f7b24628c367aa5eefcab494a8b6e36946747\"" Feb 13 19:02:28.226735 containerd[1951]: time="2025-02-13T19:02:28.226672745Z" level=info msg="CreateContainer within sandbox \"23d056818811d531ad98f867516f7b24628c367aa5eefcab494a8b6e36946747\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:02:28.268504 containerd[1951]: time="2025-02-13T19:02:28.266751353Z" level=info msg="CreateContainer within sandbox \"23d056818811d531ad98f867516f7b24628c367aa5eefcab494a8b6e36946747\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"717465f264312d3ab966c803b2366c59b8a799bfa3600f97667bd3ed763ef1b9\"" Feb 13 19:02:28.269571 containerd[1951]: time="2025-02-13T19:02:28.269498609Z" level=info msg="StartContainer for \"717465f264312d3ab966c803b2366c59b8a799bfa3600f97667bd3ed763ef1b9\"" Feb 13 19:02:28.324348 containerd[1951]: time="2025-02-13T19:02:28.324221429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzpbh,Uid:2090b4cc-8e74-4144-9fb3-15a6c82caa25,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f06230f958fd3dfaa2d6e66695f6ce5534e1f20dcd02cc3dd99bcfa0763910f\"" Feb 13 19:02:28.337969 containerd[1951]: time="2025-02-13T19:02:28.337893185Z" level=info msg="CreateContainer within sandbox \"6f06230f958fd3dfaa2d6e66695f6ce5534e1f20dcd02cc3dd99bcfa0763910f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:02:28.364070 systemd[1]: Started cri-containerd-717465f264312d3ab966c803b2366c59b8a799bfa3600f97667bd3ed763ef1b9.scope - libcontainer container 717465f264312d3ab966c803b2366c59b8a799bfa3600f97667bd3ed763ef1b9. Feb 13 19:02:28.385743 containerd[1951]: time="2025-02-13T19:02:28.385652610Z" level=info msg="CreateContainer within sandbox \"6f06230f958fd3dfaa2d6e66695f6ce5534e1f20dcd02cc3dd99bcfa0763910f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"305e19dbc470099f331d8f6b4a721215e2ea655677da52663f1eb72de7126178\"" Feb 13 19:02:28.387573 containerd[1951]: time="2025-02-13T19:02:28.386881626Z" level=info msg="StartContainer for \"305e19dbc470099f331d8f6b4a721215e2ea655677da52663f1eb72de7126178\"" Feb 13 19:02:28.462143 containerd[1951]: time="2025-02-13T19:02:28.462054546Z" level=info msg="StartContainer for \"717465f264312d3ab966c803b2366c59b8a799bfa3600f97667bd3ed763ef1b9\" returns successfully" Feb 13 19:02:28.489720 systemd[1]: Started cri-containerd-305e19dbc470099f331d8f6b4a721215e2ea655677da52663f1eb72de7126178.scope - libcontainer container 305e19dbc470099f331d8f6b4a721215e2ea655677da52663f1eb72de7126178. Feb 13 19:02:28.523996 systemd[1]: Started sshd@10-172.31.27.65:22-139.178.89.65:50968.service - OpenSSH per-connection server daemon (139.178.89.65:50968). 
Feb 13 19:02:28.641959 containerd[1951]: time="2025-02-13T19:02:28.641867971Z" level=info msg="StartContainer for \"305e19dbc470099f331d8f6b4a721215e2ea655677da52663f1eb72de7126178\" returns successfully" Feb 13 19:02:28.771356 sshd[4852]: Accepted publickey for core from 139.178.89.65 port 50968 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:28.775065 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:28.787900 systemd-logind[1936]: New session 11 of user core. Feb 13 19:02:28.800596 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:02:29.052142 kubelet[3494]: I0213 19:02:29.051917 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dzpbh" podStartSLOduration=33.051872705 podStartE2EDuration="33.051872705s" podCreationTimestamp="2025-02-13 19:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:29.049970381 +0000 UTC m=+45.630255852" watchObservedRunningTime="2025-02-13 19:02:29.051872705 +0000 UTC m=+45.632158164" Feb 13 19:02:29.071412 sshd[4876]: Connection closed by 139.178.89.65 port 50968 Feb 13 19:02:29.072269 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:29.083691 systemd[1]: sshd@10-172.31.27.65:22-139.178.89.65:50968.service: Deactivated successfully. Feb 13 19:02:29.092148 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:02:29.098161 systemd-logind[1936]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:02:29.102559 systemd-logind[1936]: Removed session 11. Feb 13 19:02:29.125049 kubelet[3494]: I0213 19:02:29.124415 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4586z" podStartSLOduration=33.124390493 podStartE2EDuration="33.124390493s" podCreationTimestamp="2025-02-13 19:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:29.123393221 +0000 UTC m=+45.703678752" watchObservedRunningTime="2025-02-13 19:02:29.124390493 +0000 UTC m=+45.704675988" Feb 13 19:02:34.117820 systemd[1]: Started sshd@11-172.31.27.65:22-139.178.89.65:50970.service - OpenSSH per-connection server daemon (139.178.89.65:50970). Feb 13 19:02:34.316244 sshd[4896]: Accepted publickey for core from 139.178.89.65 port 50970 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:34.318763 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:34.328011 systemd-logind[1936]: New session 12 of user core. Feb 13 19:02:34.334597 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:02:34.575478 sshd[4899]: Connection closed by 139.178.89.65 port 50970 Feb 13 19:02:34.575980 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:34.583102 systemd[1]: sshd@11-172.31.27.65:22-139.178.89.65:50970.service: Deactivated successfully. Feb 13 19:02:34.588214 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:02:34.589769 systemd-logind[1936]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:02:34.591826 systemd-logind[1936]: Removed session 12. 
Feb 13 19:02:39.618852 systemd[1]: Started sshd@12-172.31.27.65:22-139.178.89.65:42584.service - OpenSSH per-connection server daemon (139.178.89.65:42584). Feb 13 19:02:39.808001 sshd[4914]: Accepted publickey for core from 139.178.89.65 port 42584 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:39.810677 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:39.821174 systemd-logind[1936]: New session 13 of user core. Feb 13 19:02:39.831640 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:02:40.093977 sshd[4916]: Connection closed by 139.178.89.65 port 42584 Feb 13 19:02:40.094890 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:40.100774 systemd[1]: sshd@12-172.31.27.65:22-139.178.89.65:42584.service: Deactivated successfully. Feb 13 19:02:40.105741 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:02:40.107417 systemd-logind[1936]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:02:40.109703 systemd-logind[1936]: Removed session 13. Feb 13 19:02:45.142841 systemd[1]: Started sshd@13-172.31.27.65:22-139.178.89.65:59362.service - OpenSSH per-connection server daemon (139.178.89.65:59362). Feb 13 19:02:45.328170 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 59362 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:45.330772 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:45.338671 systemd-logind[1936]: New session 14 of user core. Feb 13 19:02:45.346584 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:02:45.593694 sshd[4933]: Connection closed by 139.178.89.65 port 59362 Feb 13 19:02:45.594445 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:45.602898 systemd[1]: sshd@13-172.31.27.65:22-139.178.89.65:59362.service: Deactivated successfully. Feb 13 19:02:45.608023 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:02:45.611625 systemd-logind[1936]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:02:45.613712 systemd-logind[1936]: Removed session 14. Feb 13 19:02:50.635872 systemd[1]: Started sshd@14-172.31.27.65:22-139.178.89.65:59366.service - OpenSSH per-connection server daemon (139.178.89.65:59366). Feb 13 19:02:50.825632 sshd[4946]: Accepted publickey for core from 139.178.89.65 port 59366 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:50.828125 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:50.837714 systemd-logind[1936]: New session 15 of user core. Feb 13 19:02:50.842631 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:02:51.086466 sshd[4948]: Connection closed by 139.178.89.65 port 59366 Feb 13 19:02:51.087689 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:51.095070 systemd[1]: sshd@14-172.31.27.65:22-139.178.89.65:59366.service: Deactivated successfully. Feb 13 19:02:51.098978 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:02:51.101416 systemd-logind[1936]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:02:51.103601 systemd-logind[1936]: Removed session 15. Feb 13 19:02:51.124870 systemd[1]: Started sshd@15-172.31.27.65:22-139.178.89.65:59372.service - OpenSSH per-connection server daemon (139.178.89.65:59372). 
Feb 13 19:02:51.321574 sshd[4961]: Accepted publickey for core from 139.178.89.65 port 59372 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:02:51.324641 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:02:51.333399 systemd-logind[1936]: New session 16 of user core.
Feb 13 19:02:51.340572 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:02:51.669702 sshd[4963]: Connection closed by 139.178.89.65 port 59372
Feb 13 19:02:51.671772 sshd-session[4961]: pam_unix(sshd:session): session closed for user core
Feb 13 19:02:51.679693 systemd-logind[1936]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:02:51.681011 systemd[1]: sshd@15-172.31.27.65:22-139.178.89.65:59372.service: Deactivated successfully.
Feb 13 19:02:51.687643 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:02:51.694672 systemd-logind[1936]: Removed session 16.
Feb 13 19:02:51.729932 systemd[1]: Started sshd@16-172.31.27.65:22-139.178.89.65:59384.service - OpenSSH per-connection server daemon (139.178.89.65:59384).
Feb 13 19:02:51.915042 sshd[4974]: Accepted publickey for core from 139.178.89.65 port 59384 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:02:51.918188 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:02:51.927694 systemd-logind[1936]: New session 17 of user core.
Feb 13 19:02:51.936665 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:02:52.183926 sshd[4976]: Connection closed by 139.178.89.65 port 59384
Feb 13 19:02:52.185086 sshd-session[4974]: pam_unix(sshd:session): session closed for user core
Feb 13 19:02:52.192250 systemd-logind[1936]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:02:52.193576 systemd[1]: sshd@16-172.31.27.65:22-139.178.89.65:59384.service: Deactivated successfully.
Feb 13 19:02:52.197007 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:02:52.201431 systemd-logind[1936]: Removed session 17.
Feb 13 19:02:57.234252 systemd[1]: Started sshd@17-172.31.27.65:22-139.178.89.65:56970.service - OpenSSH per-connection server daemon (139.178.89.65:56970).
Feb 13 19:02:57.417422 sshd[4989]: Accepted publickey for core from 139.178.89.65 port 56970 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:02:57.420079 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:02:57.430113 systemd-logind[1936]: New session 18 of user core.
Feb 13 19:02:57.437611 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:02:57.684024 sshd[4993]: Connection closed by 139.178.89.65 port 56970
Feb 13 19:02:57.685153 sshd-session[4989]: pam_unix(sshd:session): session closed for user core
Feb 13 19:02:57.692344 systemd[1]: sshd@17-172.31.27.65:22-139.178.89.65:56970.service: Deactivated successfully.
Feb 13 19:02:57.696989 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:02:57.698969 systemd-logind[1936]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:02:57.701089 systemd-logind[1936]: Removed session 18.
Feb 13 19:03:02.731852 systemd[1]: Started sshd@18-172.31.27.65:22-139.178.89.65:56982.service - OpenSSH per-connection server daemon (139.178.89.65:56982).
Feb 13 19:03:02.918347 sshd[5008]: Accepted publickey for core from 139.178.89.65 port 56982 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:02.921033 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:02.930650 systemd-logind[1936]: New session 19 of user core.
Feb 13 19:03:02.937601 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:03:03.189859 sshd[5010]: Connection closed by 139.178.89.65 port 56982
Feb 13 19:03:03.190802 sshd-session[5008]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:03.196267 systemd-logind[1936]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:03:03.196703 systemd[1]: sshd@18-172.31.27.65:22-139.178.89.65:56982.service: Deactivated successfully.
Feb 13 19:03:03.200987 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:03:03.205910 systemd-logind[1936]: Removed session 19.
Feb 13 19:03:08.233859 systemd[1]: Started sshd@19-172.31.27.65:22-139.178.89.65:43108.service - OpenSSH per-connection server daemon (139.178.89.65:43108).
Feb 13 19:03:08.427686 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 43108 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:08.430166 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:08.439335 systemd-logind[1936]: New session 20 of user core.
Feb 13 19:03:08.445583 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:03:08.697235 sshd[5025]: Connection closed by 139.178.89.65 port 43108
Feb 13 19:03:08.698439 sshd-session[5023]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:08.704992 systemd[1]: sshd@19-172.31.27.65:22-139.178.89.65:43108.service: Deactivated successfully.
Feb 13 19:03:08.710186 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:03:08.714107 systemd-logind[1936]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:03:08.716659 systemd-logind[1936]: Removed session 20.
Feb 13 19:03:08.739850 systemd[1]: Started sshd@20-172.31.27.65:22-139.178.89.65:43118.service - OpenSSH per-connection server daemon (139.178.89.65:43118).
Feb 13 19:03:08.930584 sshd[5037]: Accepted publickey for core from 139.178.89.65 port 43118 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:08.933220 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:08.943713 systemd-logind[1936]: New session 21 of user core.
Feb 13 19:03:08.952660 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:03:09.253746 sshd[5039]: Connection closed by 139.178.89.65 port 43118
Feb 13 19:03:09.254858 sshd-session[5037]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:09.261074 systemd[1]: sshd@20-172.31.27.65:22-139.178.89.65:43118.service: Deactivated successfully.
Feb 13 19:03:09.265512 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:03:09.269344 systemd-logind[1936]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:03:09.271426 systemd-logind[1936]: Removed session 21.
Feb 13 19:03:09.298792 systemd[1]: Started sshd@21-172.31.27.65:22-139.178.89.65:43132.service - OpenSSH per-connection server daemon (139.178.89.65:43132).
Feb 13 19:03:09.476629 sshd[5049]: Accepted publickey for core from 139.178.89.65 port 43132 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:09.479207 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:09.488038 systemd-logind[1936]: New session 22 of user core.
Feb 13 19:03:09.494656 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:03:12.384161 sshd[5051]: Connection closed by 139.178.89.65 port 43132
Feb 13 19:03:12.384948 sshd-session[5049]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:12.396536 systemd-logind[1936]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:03:12.398576 systemd[1]: sshd@21-172.31.27.65:22-139.178.89.65:43132.service: Deactivated successfully.
Feb 13 19:03:12.408148 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:03:12.409518 systemd[1]: session-22.scope: Consumed 878ms CPU time, 65.1M memory peak.
Feb 13 19:03:12.430789 systemd-logind[1936]: Removed session 22.
Feb 13 19:03:12.441000 systemd[1]: Started sshd@22-172.31.27.65:22-139.178.89.65:43140.service - OpenSSH per-connection server daemon (139.178.89.65:43140).
Feb 13 19:03:12.636646 sshd[5067]: Accepted publickey for core from 139.178.89.65 port 43140 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:12.639365 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:12.649383 systemd-logind[1936]: New session 23 of user core.
Feb 13 19:03:12.655614 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:03:13.145971 sshd[5070]: Connection closed by 139.178.89.65 port 43140
Feb 13 19:03:13.146871 sshd-session[5067]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:13.154818 systemd[1]: sshd@22-172.31.27.65:22-139.178.89.65:43140.service: Deactivated successfully.
Feb 13 19:03:13.158736 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:03:13.163389 systemd-logind[1936]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:03:13.165755 systemd-logind[1936]: Removed session 23.
Feb 13 19:03:13.189938 systemd[1]: Started sshd@23-172.31.27.65:22-139.178.89.65:43142.service - OpenSSH per-connection server daemon (139.178.89.65:43142).
Feb 13 19:03:13.381996 sshd[5080]: Accepted publickey for core from 139.178.89.65 port 43142 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:13.386114 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:13.394502 systemd-logind[1936]: New session 24 of user core.
Feb 13 19:03:13.401594 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:03:13.643177 sshd[5082]: Connection closed by 139.178.89.65 port 43142
Feb 13 19:03:13.644120 sshd-session[5080]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:13.650986 systemd[1]: sshd@23-172.31.27.65:22-139.178.89.65:43142.service: Deactivated successfully.
Feb 13 19:03:13.656125 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:03:13.658353 systemd-logind[1936]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:03:13.660288 systemd-logind[1936]: Removed session 24.
Feb 13 19:03:18.690862 systemd[1]: Started sshd@24-172.31.27.65:22-139.178.89.65:59048.service - OpenSSH per-connection server daemon (139.178.89.65:59048).
Feb 13 19:03:18.892290 sshd[5093]: Accepted publickey for core from 139.178.89.65 port 59048 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:18.894471 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:18.902204 systemd-logind[1936]: New session 25 of user core.
Feb 13 19:03:18.912592 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:03:19.153382 sshd[5095]: Connection closed by 139.178.89.65 port 59048
Feb 13 19:03:19.154293 sshd-session[5093]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:19.162359 systemd[1]: sshd@24-172.31.27.65:22-139.178.89.65:59048.service: Deactivated successfully.
Feb 13 19:03:19.167086 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:03:19.171288 systemd-logind[1936]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:03:19.174539 systemd-logind[1936]: Removed session 25.
Feb 13 19:03:24.191845 systemd[1]: Started sshd@25-172.31.27.65:22-139.178.89.65:59064.service - OpenSSH per-connection server daemon (139.178.89.65:59064).
Feb 13 19:03:24.381078 sshd[5109]: Accepted publickey for core from 139.178.89.65 port 59064 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:24.383609 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:24.392151 systemd-logind[1936]: New session 26 of user core.
Feb 13 19:03:24.400611 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:03:24.642241 sshd[5111]: Connection closed by 139.178.89.65 port 59064
Feb 13 19:03:24.643754 sshd-session[5109]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:24.649233 systemd-logind[1936]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:03:24.651487 systemd[1]: sshd@25-172.31.27.65:22-139.178.89.65:59064.service: Deactivated successfully.
Feb 13 19:03:24.656407 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:03:24.660214 systemd-logind[1936]: Removed session 26.
Feb 13 19:03:29.685869 systemd[1]: Started sshd@26-172.31.27.65:22-139.178.89.65:35870.service - OpenSSH per-connection server daemon (139.178.89.65:35870).
Feb 13 19:03:29.884452 sshd[5125]: Accepted publickey for core from 139.178.89.65 port 35870 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:29.886901 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:29.897120 systemd-logind[1936]: New session 27 of user core.
Feb 13 19:03:29.901588 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:03:30.142937 sshd[5127]: Connection closed by 139.178.89.65 port 35870
Feb 13 19:03:30.143465 sshd-session[5125]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:30.150218 systemd[1]: sshd@26-172.31.27.65:22-139.178.89.65:35870.service: Deactivated successfully.
Feb 13 19:03:30.154808 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:03:30.156683 systemd-logind[1936]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:03:30.159899 systemd-logind[1936]: Removed session 27.
Feb 13 19:03:35.183966 systemd[1]: Started sshd@27-172.31.27.65:22-139.178.89.65:33404.service - OpenSSH per-connection server daemon (139.178.89.65:33404).
Feb 13 19:03:35.373797 sshd[5138]: Accepted publickey for core from 139.178.89.65 port 33404 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:35.376344 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:35.386508 systemd-logind[1936]: New session 28 of user core.
Feb 13 19:03:35.393590 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:03:35.632760 sshd[5140]: Connection closed by 139.178.89.65 port 33404
Feb 13 19:03:35.633734 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:35.640277 systemd[1]: sshd@27-172.31.27.65:22-139.178.89.65:33404.service: Deactivated successfully.
Feb 13 19:03:35.646586 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:03:35.648355 systemd-logind[1936]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:03:35.650171 systemd-logind[1936]: Removed session 28.
Feb 13 19:03:35.671868 systemd[1]: Started sshd@28-172.31.27.65:22-139.178.89.65:33408.service - OpenSSH per-connection server daemon (139.178.89.65:33408).
Feb 13 19:03:35.864664 sshd[5151]: Accepted publickey for core from 139.178.89.65 port 33408 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:35.867169 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:35.876377 systemd-logind[1936]: New session 29 of user core.
Feb 13 19:03:35.884580 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:03:37.952364 containerd[1951]: time="2025-02-13T19:03:37.952228731Z" level=info msg="StopContainer for \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\" with timeout 30 (s)"
Feb 13 19:03:37.952364 containerd[1951]: time="2025-02-13T19:03:37.952748283Z" level=info msg="Stop container \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\" with signal terminated"
Feb 13 19:03:37.987018 systemd[1]: cri-containerd-01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e.scope: Deactivated successfully.
Feb 13 19:03:37.992710 containerd[1951]: time="2025-02-13T19:03:37.992639835Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:03:38.007811 containerd[1951]: time="2025-02-13T19:03:38.007489667Z" level=info msg="StopContainer for \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\" with timeout 2 (s)"
Feb 13 19:03:38.008630 containerd[1951]: time="2025-02-13T19:03:38.008505515Z" level=info msg="Stop container \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\" with signal terminated"
Feb 13 19:03:38.024445 systemd-networkd[1865]: lxc_health: Link DOWN
Feb 13 19:03:38.024466 systemd-networkd[1865]: lxc_health: Lost carrier
Feb 13 19:03:38.057226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e-rootfs.mount: Deactivated successfully.
Feb 13 19:03:38.064657 systemd[1]: cri-containerd-ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073.scope: Deactivated successfully.
Feb 13 19:03:38.067737 systemd[1]: cri-containerd-ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073.scope: Consumed 14.512s CPU time, 126.8M memory peak, 144K read from disk, 12.9M written to disk.
Feb 13 19:03:38.081948 containerd[1951]: time="2025-02-13T19:03:38.081597564Z" level=info msg="shim disconnected" id=01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e namespace=k8s.io
Feb 13 19:03:38.081948 containerd[1951]: time="2025-02-13T19:03:38.081676200Z" level=warning msg="cleaning up after shim disconnected" id=01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e namespace=k8s.io
Feb 13 19:03:38.081948 containerd[1951]: time="2025-02-13T19:03:38.081697464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:38.116940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073-rootfs.mount: Deactivated successfully.
Feb 13 19:03:38.119608 containerd[1951]: time="2025-02-13T19:03:38.119124504Z" level=info msg="shim disconnected" id=ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073 namespace=k8s.io
Feb 13 19:03:38.119608 containerd[1951]: time="2025-02-13T19:03:38.119238156Z" level=warning msg="cleaning up after shim disconnected" id=ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073 namespace=k8s.io
Feb 13 19:03:38.119608 containerd[1951]: time="2025-02-13T19:03:38.119259516Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:38.125697 containerd[1951]: time="2025-02-13T19:03:38.125407392Z" level=info msg="StopContainer for \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\" returns successfully"
Feb 13 19:03:38.126833 containerd[1951]: time="2025-02-13T19:03:38.126609228Z" level=info msg="StopPodSandbox for \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\""
Feb 13 19:03:38.126833 containerd[1951]: time="2025-02-13T19:03:38.126674256Z" level=info msg="Container to stop \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:38.135173 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308-shm.mount: Deactivated successfully.
Feb 13 19:03:38.152449 systemd[1]: cri-containerd-9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308.scope: Deactivated successfully.
Feb 13 19:03:38.162046 containerd[1951]: time="2025-02-13T19:03:38.161893500Z" level=info msg="StopContainer for \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\" returns successfully"
Feb 13 19:03:38.163105 containerd[1951]: time="2025-02-13T19:03:38.163042668Z" level=info msg="StopPodSandbox for \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\""
Feb 13 19:03:38.163240 containerd[1951]: time="2025-02-13T19:03:38.163112244Z" level=info msg="Container to stop \"557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:38.163240 containerd[1951]: time="2025-02-13T19:03:38.163140936Z" level=info msg="Container to stop \"424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:38.163240 containerd[1951]: time="2025-02-13T19:03:38.163164204Z" level=info msg="Container to stop \"cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:38.163240 containerd[1951]: time="2025-02-13T19:03:38.163185636Z" level=info msg="Container to stop \"700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:38.163240 containerd[1951]: time="2025-02-13T19:03:38.163206876Z" level=info msg="Container to stop \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:03:38.169046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3-shm.mount: Deactivated successfully.
Feb 13 19:03:38.185783 systemd[1]: cri-containerd-a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3.scope: Deactivated successfully.
Feb 13 19:03:38.250364 containerd[1951]: time="2025-02-13T19:03:38.250238341Z" level=info msg="shim disconnected" id=a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3 namespace=k8s.io
Feb 13 19:03:38.252636 containerd[1951]: time="2025-02-13T19:03:38.251848081Z" level=warning msg="cleaning up after shim disconnected" id=a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3 namespace=k8s.io
Feb 13 19:03:38.252636 containerd[1951]: time="2025-02-13T19:03:38.251938549Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:38.252636 containerd[1951]: time="2025-02-13T19:03:38.250982113Z" level=info msg="shim disconnected" id=9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308 namespace=k8s.io
Feb 13 19:03:38.252636 containerd[1951]: time="2025-02-13T19:03:38.252050269Z" level=warning msg="cleaning up after shim disconnected" id=9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308 namespace=k8s.io
Feb 13 19:03:38.252636 containerd[1951]: time="2025-02-13T19:03:38.252108673Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:03:38.286581 containerd[1951]: time="2025-02-13T19:03:38.286143709Z" level=info msg="TearDown network for sandbox \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" successfully"
Feb 13 19:03:38.286581 containerd[1951]: time="2025-02-13T19:03:38.286197061Z" level=info msg="StopPodSandbox for \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" returns successfully"
Feb 13 19:03:38.292345 containerd[1951]: time="2025-02-13T19:03:38.292150057Z" level=info msg="TearDown network for sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" successfully"
Feb 13 19:03:38.292345 containerd[1951]: time="2025-02-13T19:03:38.292198417Z" level=info msg="StopPodSandbox for \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" returns successfully"
Feb 13 19:03:38.340712 kubelet[3494]: I0213 19:03:38.339845 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-kernel\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.340712 kubelet[3494]: I0213 19:03:38.339936 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-cilium-config-path\") pod \"e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7\" (UID: \"e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7\") "
Feb 13 19:03:38.340712 kubelet[3494]: I0213 19:03:38.339980 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-clustermesh-secrets\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.340712 kubelet[3494]: I0213 19:03:38.340014 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cni-path\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.340712 kubelet[3494]: I0213 19:03:38.340049 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-run\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.340712 kubelet[3494]: I0213 19:03:38.340178 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hubble-tls\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.341958 kubelet[3494]: I0213 19:03:38.340214 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-net\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.341958 kubelet[3494]: I0213 19:03:38.340248 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-lib-modules\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.341958 kubelet[3494]: I0213 19:03:38.340287 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-config-path\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.341958 kubelet[3494]: I0213 19:03:38.340349 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-cgroup\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.341958 kubelet[3494]: I0213 19:03:38.340388 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q45wq\" (UniqueName: \"kubernetes.io/projected/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-kube-api-access-q45wq\") pod \"e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7\" (UID: \"e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7\") "
Feb 13 19:03:38.341958 kubelet[3494]: I0213 19:03:38.340427 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hostproc\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.343208 kubelet[3494]: I0213 19:03:38.340463 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24clc\" (UniqueName: \"kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-kube-api-access-24clc\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.343208 kubelet[3494]: I0213 19:03:38.340497 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-etc-cni-netd\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.343208 kubelet[3494]: I0213 19:03:38.340531 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-xtables-lock\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.343208 kubelet[3494]: I0213 19:03:38.340568 3494 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-bpf-maps\") pod \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\" (UID: \"f99e544e-94ba-4bfe-b934-33d1d8e3a5ac\") "
Feb 13 19:03:38.343208 kubelet[3494]: I0213 19:03:38.342590 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.343208 kubelet[3494]: I0213 19:03:38.342682 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.344066 kubelet[3494]: I0213 19:03:38.342766 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.344066 kubelet[3494]: I0213 19:03:38.342813 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cni-path" (OuterVolumeSpecName: "cni-path") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.344066 kubelet[3494]: I0213 19:03:38.342886 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.344066 kubelet[3494]: I0213 19:03:38.343391 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.344066 kubelet[3494]: I0213 19:03:38.343458 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.344999 kubelet[3494]: I0213 19:03:38.344179 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hostproc" (OuterVolumeSpecName: "hostproc") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.349772 kubelet[3494]: I0213 19:03:38.349431 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.352721 kubelet[3494]: I0213 19:03:38.350070 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:03:38.354253 kubelet[3494]: I0213 19:03:38.354184 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:03:38.363652 kubelet[3494]: I0213 19:03:38.363578 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-kube-api-access-24clc" (OuterVolumeSpecName: "kube-api-access-24clc") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "kube-api-access-24clc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:03:38.364527 kubelet[3494]: I0213 19:03:38.364481 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7" (UID: "e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:03:38.364881 kubelet[3494]: I0213 19:03:38.364846 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:03:38.365423 kubelet[3494]: I0213 19:03:38.365358 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-kube-api-access-q45wq" (OuterVolumeSpecName: "kube-api-access-q45wq") pod "e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7" (UID: "e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7"). InnerVolumeSpecName "kube-api-access-q45wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:03:38.365723 kubelet[3494]: I0213 19:03:38.365693 3494 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" (UID: "f99e544e-94ba-4bfe-b934-33d1d8e3a5ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:03:38.441682 kubelet[3494]: I0213 19:03:38.441624 3494 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-etc-cni-netd\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.441849 kubelet[3494]: I0213 19:03:38.441728 3494 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-24clc\" (UniqueName: \"kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-kube-api-access-24clc\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.441849 kubelet[3494]: I0213 19:03:38.441786 3494 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-bpf-maps\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.441849 kubelet[3494]: I0213 19:03:38.441815 3494 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-xtables-lock\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442034 kubelet[3494]: I0213 19:03:38.441836 3494 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-kernel\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442034 kubelet[3494]: I0213 19:03:38.441887 3494 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-cilium-config-path\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442034 kubelet[3494]: I0213 19:03:38.441910 3494 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cni-path\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442034 kubelet[3494]: I0213 19:03:38.441952 3494 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-run\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442034 kubelet[3494]: I0213 19:03:38.441976 3494 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-clustermesh-secrets\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442034 kubelet[3494]: I0213 19:03:38.441996 3494 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-host-proc-sys-net\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442401 kubelet[3494]: I0213 19:03:38.442037 3494 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-lib-modules\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442401 kubelet[3494]: I0213 19:03:38.442062 3494 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hubble-tls\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442401 kubelet[3494]: I0213 19:03:38.442082 3494 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-config-path\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442401 kubelet[3494]: I0213 19:03:38.442124 3494 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q45wq\" (UniqueName: \"kubernetes.io/projected/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7-kube-api-access-q45wq\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442401 kubelet[3494]: I0213 19:03:38.442151 3494 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-hostproc\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.442401 kubelet[3494]: I0213 19:03:38.442172 3494 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac-cilium-cgroup\") on node \"ip-172-31-27-65\" DevicePath \"\""
Feb 13 19:03:38.959724 kubelet[3494]: E0213 19:03:38.959511 3494 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:03:38.961617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3-rootfs.mount: Deactivated successfully.
Feb 13 19:03:38.962073 systemd[1]: var-lib-kubelet-pods-f99e544e\x2d94ba\x2d4bfe\x2db934\x2d33d1d8e3a5ac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:03:38.962234 systemd[1]: var-lib-kubelet-pods-f99e544e\x2d94ba\x2d4bfe\x2db934\x2d33d1d8e3a5ac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:03:38.962428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308-rootfs.mount: Deactivated successfully.
Feb 13 19:03:38.962568 systemd[1]: var-lib-kubelet-pods-e3a88d9a\x2d6590\x2d4d1a\x2db4c6\x2d6608e2bb92e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq45wq.mount: Deactivated successfully.
Feb 13 19:03:38.962704 systemd[1]: var-lib-kubelet-pods-f99e544e\x2d94ba\x2d4bfe\x2db934\x2d33d1d8e3a5ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24clc.mount: Deactivated successfully.
Feb 13 19:03:39.244426 kubelet[3494]: I0213 19:03:39.242266 3494 scope.go:117] "RemoveContainer" containerID="ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073"
Feb 13 19:03:39.250934 containerd[1951]: time="2025-02-13T19:03:39.250349606Z" level=info msg="RemoveContainer for \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\""
Feb 13 19:03:39.261809 systemd[1]: Removed slice kubepods-burstable-podf99e544e_94ba_4bfe_b934_33d1d8e3a5ac.slice - libcontainer container kubepods-burstable-podf99e544e_94ba_4bfe_b934_33d1d8e3a5ac.slice.
Feb 13 19:03:39.262067 systemd[1]: kubepods-burstable-podf99e544e_94ba_4bfe_b934_33d1d8e3a5ac.slice: Consumed 14.664s CPU time, 127.3M memory peak, 144K read from disk, 12.9M written to disk.
Feb 13 19:03:39.266896 containerd[1951]: time="2025-02-13T19:03:39.266663498Z" level=info msg="RemoveContainer for \"ce8dcecc175c91e61779969f38525d0c642c808b811892da73a24b9da494b073\" returns successfully"
Feb 13 19:03:39.268714 kubelet[3494]: I0213 19:03:39.268666 3494 scope.go:117] "RemoveContainer" containerID="cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085"
Feb 13 19:03:39.271193 systemd[1]: Removed slice kubepods-besteffort-pode3a88d9a_6590_4d1a_b4c6_6608e2bb92e7.slice - libcontainer container kubepods-besteffort-pode3a88d9a_6590_4d1a_b4c6_6608e2bb92e7.slice.
Feb 13 19:03:39.272255 containerd[1951]: time="2025-02-13T19:03:39.271255394Z" level=info msg="RemoveContainer for \"cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085\""
Feb 13 19:03:39.280409 containerd[1951]: time="2025-02-13T19:03:39.280207070Z" level=info msg="RemoveContainer for \"cdc478773de7ca034c1bfd9b59db01ed0fc57fe8b616dadfcba2ae5db6a9a085\" returns successfully"
Feb 13 19:03:39.280611 kubelet[3494]: I0213 19:03:39.280573 3494 scope.go:117] "RemoveContainer" containerID="424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820"
Feb 13 19:03:39.283877 containerd[1951]: time="2025-02-13T19:03:39.283820666Z" level=info msg="RemoveContainer for \"424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820\""
Feb 13 19:03:39.293092 containerd[1951]: time="2025-02-13T19:03:39.292925330Z" level=info msg="RemoveContainer for \"424186d9f8bf5016b378feae71fae4239febf5b49de57da5a34030ac50573820\" returns successfully"
Feb 13 19:03:39.293647 kubelet[3494]: I0213 19:03:39.293506 3494 scope.go:117] "RemoveContainer" containerID="557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880"
Feb 13 19:03:39.296025 containerd[1951]: time="2025-02-13T19:03:39.295617422Z" level=info msg="RemoveContainer for \"557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880\""
Feb 13 19:03:39.304619 containerd[1951]: time="2025-02-13T19:03:39.303814358Z" level=info msg="RemoveContainer for \"557f3b0f139d6fa367aaed5eca77dd7f6b35e2e7f8540d1ea5a80a7dea72c880\" returns successfully"
Feb 13 19:03:39.304804 kubelet[3494]: I0213 19:03:39.304373 3494 scope.go:117] "RemoveContainer" containerID="700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4"
Feb 13 19:03:39.308290 containerd[1951]: time="2025-02-13T19:03:39.308234618Z" level=info msg="RemoveContainer for \"700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4\""
Feb 13 19:03:39.323403 containerd[1951]: time="2025-02-13T19:03:39.323330510Z" level=info msg="RemoveContainer for \"700eb9f089a4f900e8d4cd6b8c275764448662bc6f9b3dbba7100f66037b97c4\" returns successfully"
Feb 13 19:03:39.323789 kubelet[3494]: I0213 19:03:39.323706 3494 scope.go:117] "RemoveContainer" containerID="01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e"
Feb 13 19:03:39.325734 containerd[1951]: time="2025-02-13T19:03:39.325672874Z" level=info msg="RemoveContainer for \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\""
Feb 13 19:03:39.337787 containerd[1951]: time="2025-02-13T19:03:39.337551710Z" level=info msg="RemoveContainer for \"01662ecd54659eb3bf9acddc2a5829c2a32c6831c4032f5de02adabe3193419e\" returns successfully"
Feb 13 19:03:39.721298 kubelet[3494]: I0213 19:03:39.721217 3494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7" path="/var/lib/kubelet/pods/e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7/volumes"
Feb 13 19:03:39.722285 kubelet[3494]: I0213 19:03:39.722231 3494 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" path="/var/lib/kubelet/pods/f99e544e-94ba-4bfe-b934-33d1d8e3a5ac/volumes"
Feb 13 19:03:39.881978 sshd[5153]: Connection closed by 139.178.89.65 port 33408
Feb 13 19:03:39.882519 sshd-session[5151]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:39.889647 systemd-logind[1936]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:03:39.891385 systemd[1]: sshd@28-172.31.27.65:22-139.178.89.65:33408.service: Deactivated successfully.
Feb 13 19:03:39.895266 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:03:39.896408 systemd[1]: session-29.scope: Consumed 1.305s CPU time, 23.8M memory peak.
Feb 13 19:03:39.898095 systemd-logind[1936]: Removed session 29.
Feb 13 19:03:39.922836 systemd[1]: Started sshd@29-172.31.27.65:22-139.178.89.65:33424.service - OpenSSH per-connection server daemon (139.178.89.65:33424).
Feb 13 19:03:40.111394 sshd[5317]: Accepted publickey for core from 139.178.89.65 port 33424 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:40.113540 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:40.122814 systemd-logind[1936]: New session 30 of user core.
Feb 13 19:03:40.133905 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 19:03:40.570132 ntpd[1928]: Deleting interface #11 lxc_health, fe80::f807:bbff:fe9c:ddbd%8#123, interface stats: received=0, sent=0, dropped=0, active_time=77 secs
Feb 13 19:03:40.570661 ntpd[1928]: 13 Feb 19:03:40 ntpd[1928]: Deleting interface #11 lxc_health, fe80::f807:bbff:fe9c:ddbd%8#123, interface stats: received=0, sent=0, dropped=0, active_time=77 secs
Feb 13 19:03:41.447546 sshd[5319]: Connection closed by 139.178.89.65 port 33424
Feb 13 19:03:41.449278 sshd-session[5317]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:41.457819 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 19:03:41.458243 systemd[1]: session-30.scope: Consumed 1.123s CPU time, 23.6M memory peak.
Feb 13 19:03:41.461536 systemd[1]: sshd@29-172.31.27.65:22-139.178.89.65:33424.service: Deactivated successfully.
Feb 13 19:03:41.471929 systemd-logind[1936]: Session 30 logged out. Waiting for processes to exit.
Feb 13 19:03:41.482989 kubelet[3494]: I0213 19:03:41.482360 3494 topology_manager.go:215] "Topology Admit Handler" podUID="2f90d19b-cf21-4f5a-b11d-efbaf42766dc" podNamespace="kube-system" podName="cilium-lxv7m"
Feb 13 19:03:41.482989 kubelet[3494]: E0213 19:03:41.482484 3494 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" containerName="clean-cilium-state"
Feb 13 19:03:41.482989 kubelet[3494]: E0213 19:03:41.482560 3494 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" containerName="mount-cgroup"
Feb 13 19:03:41.482989 kubelet[3494]: E0213 19:03:41.482579 3494 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" containerName="apply-sysctl-overwrites"
Feb 13 19:03:41.482989 kubelet[3494]: E0213 19:03:41.482660 3494 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" containerName="mount-bpf-fs"
Feb 13 19:03:41.482989 kubelet[3494]: E0213 19:03:41.482679 3494 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7" containerName="cilium-operator"
Feb 13 19:03:41.482989 kubelet[3494]: E0213 19:03:41.482725 3494 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" containerName="cilium-agent"
Feb 13 19:03:41.482989 kubelet[3494]: I0213 19:03:41.482770 3494 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a88d9a-6590-4d1a-b4c6-6608e2bb92e7" containerName="cilium-operator"
Feb 13 19:03:41.482989 kubelet[3494]: I0213 19:03:41.482847 3494 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99e544e-94ba-4bfe-b934-33d1d8e3a5ac" containerName="cilium-agent"
Feb 13 19:03:41.501185 systemd-logind[1936]: Removed session 30.
Feb 13 19:03:41.505151 systemd[1]: Started sshd@30-172.31.27.65:22-139.178.89.65:33436.service - OpenSSH per-connection server daemon (139.178.89.65:33436).
Feb 13 19:03:41.538485 systemd[1]: Created slice kubepods-burstable-pod2f90d19b_cf21_4f5a_b11d_efbaf42766dc.slice - libcontainer container kubepods-burstable-pod2f90d19b_cf21_4f5a_b11d_efbaf42766dc.slice.
Feb 13 19:03:41.562320 kubelet[3494]: I0213 19:03:41.562033 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-etc-cni-netd\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562479 kubelet[3494]: I0213 19:03:41.562350 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-host-proc-sys-net\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562479 kubelet[3494]: I0213 19:03:41.562404 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhfzt\" (UniqueName: \"kubernetes.io/projected/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-kube-api-access-nhfzt\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562479 kubelet[3494]: I0213 19:03:41.562443 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-bpf-maps\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562680 kubelet[3494]: I0213 19:03:41.562479 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-cilium-ipsec-secrets\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562680 kubelet[3494]: I0213 19:03:41.562518 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-host-proc-sys-kernel\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562680 kubelet[3494]: I0213 19:03:41.562551 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-hubble-tls\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562680 kubelet[3494]: I0213 19:03:41.562589 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-cilium-run\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562680 kubelet[3494]: I0213 19:03:41.562624 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-xtables-lock\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562680 kubelet[3494]: I0213 19:03:41.562661 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-cilium-cgroup\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562990 kubelet[3494]: I0213 19:03:41.562701 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-hostproc\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562990 kubelet[3494]: I0213 19:03:41.562737 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-cni-path\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562990 kubelet[3494]: I0213 19:03:41.562776 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-cilium-config-path\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.562990 kubelet[3494]: I0213 19:03:41.562816 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-lib-modules\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.565354 kubelet[3494]: I0213 19:03:41.564516 3494 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f90d19b-cf21-4f5a-b11d-efbaf42766dc-clustermesh-secrets\") pod \"cilium-lxv7m\" (UID: \"2f90d19b-cf21-4f5a-b11d-efbaf42766dc\") " pod="kube-system/cilium-lxv7m"
Feb 13 19:03:41.742487 sshd[5328]: Accepted publickey for core from 139.178.89.65 port 33436 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:41.744494 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:41.753118 systemd-logind[1936]: New session 31 of user core.
Feb 13 19:03:41.759616 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 19:03:41.851282 containerd[1951]: time="2025-02-13T19:03:41.851184619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxv7m,Uid:2f90d19b-cf21-4f5a-b11d-efbaf42766dc,Namespace:kube-system,Attempt:0,}"
Feb 13 19:03:41.885363 sshd[5335]: Connection closed by 139.178.89.65 port 33436
Feb 13 19:03:41.888111 sshd-session[5328]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:41.894932 systemd[1]: sshd@30-172.31.27.65:22-139.178.89.65:33436.service: Deactivated successfully.
Feb 13 19:03:41.899424 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 19:03:41.901995 containerd[1951]: time="2025-02-13T19:03:41.901763299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:03:41.901995 containerd[1951]: time="2025-02-13T19:03:41.901891855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:03:41.902526 containerd[1951]: time="2025-02-13T19:03:41.901931275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:41.902526 containerd[1951]: time="2025-02-13T19:03:41.902100907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:03:41.905802 systemd-logind[1936]: Session 31 logged out. Waiting for processes to exit.
Feb 13 19:03:41.943882 systemd[1]: Started sshd@31-172.31.27.65:22-139.178.89.65:33450.service - OpenSSH per-connection server daemon (139.178.89.65:33450).
Feb 13 19:03:41.947241 systemd-logind[1936]: Removed session 31.
Feb 13 19:03:41.964631 systemd[1]: Started cri-containerd-6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847.scope - libcontainer container 6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847.
Feb 13 19:03:42.009997 containerd[1951]: time="2025-02-13T19:03:42.009822543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxv7m,Uid:2f90d19b-cf21-4f5a-b11d-efbaf42766dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\""
Feb 13 19:03:42.016724 containerd[1951]: time="2025-02-13T19:03:42.016520547Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:03:42.039917 containerd[1951]: time="2025-02-13T19:03:42.039835743Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39\""
Feb 13 19:03:42.040912 containerd[1951]: time="2025-02-13T19:03:42.040651383Z" level=info msg="StartContainer for \"c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39\""
Feb 13 19:03:42.084635 systemd[1]: Started cri-containerd-c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39.scope - libcontainer container c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39.
Feb 13 19:03:42.134674 containerd[1951]: time="2025-02-13T19:03:42.134605936Z" level=info msg="StartContainer for \"c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39\" returns successfully"
Feb 13 19:03:42.139007 sshd[5369]: Accepted publickey for core from 139.178.89.65 port 33450 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:03:42.142399 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:42.151259 systemd[1]: cri-containerd-c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39.scope: Deactivated successfully.
Feb 13 19:03:42.161841 systemd-logind[1936]: New session 32 of user core.
Feb 13 19:03:42.170970 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 19:03:42.214750 containerd[1951]: time="2025-02-13T19:03:42.214672312Z" level=info msg="shim disconnected" id=c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39 namespace=k8s.io Feb 13 19:03:42.215241 containerd[1951]: time="2025-02-13T19:03:42.215209372Z" level=warning msg="cleaning up after shim disconnected" id=c443c625d34d9193d0f28ddc19668b87928d586ddd4d4209524c1c1fb01c0c39 namespace=k8s.io Feb 13 19:03:42.215441 containerd[1951]: time="2025-02-13T19:03:42.215397880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:42.280007 containerd[1951]: time="2025-02-13T19:03:42.279863969Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:03:42.314388 containerd[1951]: time="2025-02-13T19:03:42.309297689Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17\"" Feb 13 19:03:42.319925 containerd[1951]: time="2025-02-13T19:03:42.319861637Z" level=info msg="StartContainer for \"8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17\"" Feb 13 19:03:42.407637 systemd[1]: Started cri-containerd-8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17.scope - libcontainer container 8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17. Feb 13 19:03:42.481845 containerd[1951]: time="2025-02-13T19:03:42.481476822Z" level=info msg="StartContainer for \"8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17\" returns successfully" Feb 13 19:03:42.505223 systemd[1]: cri-containerd-8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17.scope: Deactivated successfully. Feb 13 19:03:42.548196 containerd[1951]: time="2025-02-13T19:03:42.548008170Z" level=info msg="shim disconnected" id=8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17 namespace=k8s.io Feb 13 19:03:42.548196 containerd[1951]: time="2025-02-13T19:03:42.548081082Z" level=warning msg="cleaning up after shim disconnected" id=8a854ee1b7e3ee9a3436945132ac08274d02badb89933fe5525e8542f77a7d17 namespace=k8s.io Feb 13 19:03:42.548196 containerd[1951]: time="2025-02-13T19:03:42.548100654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:43.279028 containerd[1951]: time="2025-02-13T19:03:43.278951154Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:03:43.319379 containerd[1951]: time="2025-02-13T19:03:43.318667530Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960\"" Feb 13 19:03:43.319581 containerd[1951]: time="2025-02-13T19:03:43.319468194Z" level=info msg="StartContainer for \"9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960\"" Feb 13 19:03:43.379717 systemd[1]: Started cri-containerd-9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960.scope - libcontainer container 9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960. 
Feb 13 19:03:43.439168 containerd[1951]: time="2025-02-13T19:03:43.439030098Z" level=info msg="StartContainer for \"9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960\" returns successfully" Feb 13 19:03:43.442658 systemd[1]: cri-containerd-9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960.scope: Deactivated successfully. Feb 13 19:03:43.490012 containerd[1951]: time="2025-02-13T19:03:43.489747463Z" level=info msg="shim disconnected" id=9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960 namespace=k8s.io Feb 13 19:03:43.490012 containerd[1951]: time="2025-02-13T19:03:43.489819211Z" level=warning msg="cleaning up after shim disconnected" id=9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960 namespace=k8s.io Feb 13 19:03:43.490012 containerd[1951]: time="2025-02-13T19:03:43.489838375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:43.674629 systemd[1]: run-containerd-runc-k8s.io-9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960-runc.NC6rxS.mount: Deactivated successfully. Feb 13 19:03:43.674811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ed51261211b6b02fd4ff8df0c3cc6a7d012602386dce0cf12ca7ef0ff11b960-rootfs.mount: Deactivated successfully. Feb 13 19:03:43.703222 containerd[1951]: time="2025-02-13T19:03:43.703080752Z" level=info msg="StopPodSandbox for \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\"" Feb 13 19:03:43.703494 containerd[1951]: time="2025-02-13T19:03:43.703230752Z" level=info msg="TearDown network for sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" successfully" Feb 13 19:03:43.703494 containerd[1951]: time="2025-02-13T19:03:43.703253684Z" level=info msg="StopPodSandbox for \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" returns successfully" Feb 13 19:03:43.704681 containerd[1951]: time="2025-02-13T19:03:43.704229812Z" level=info msg="RemovePodSandbox for \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\"" Feb 13 19:03:43.704681 containerd[1951]: time="2025-02-13T19:03:43.704280032Z" level=info msg="Forcibly stopping sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\"" Feb 13 19:03:43.704681 containerd[1951]: time="2025-02-13T19:03:43.704419124Z" level=info msg="TearDown network for sandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" successfully" Feb 13 19:03:43.710507 containerd[1951]: time="2025-02-13T19:03:43.710432132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:03:43.710669 containerd[1951]: time="2025-02-13T19:03:43.710521496Z" level=info msg="RemovePodSandbox \"a0f4dfcbd43cb101852d2a2bfb09bf214df6155a203824bd3fc9c54dd1f2a4f3\" returns successfully" Feb 13 19:03:43.712004 containerd[1951]: time="2025-02-13T19:03:43.711764972Z" level=info msg="StopPodSandbox for \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\"" Feb 13 19:03:43.712004 containerd[1951]: time="2025-02-13T19:03:43.711900596Z" level=info msg="TearDown network for sandbox \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" successfully" Feb 13 19:03:43.712004 containerd[1951]: time="2025-02-13T19:03:43.711921344Z" level=info msg="StopPodSandbox for \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" returns successfully" Feb 13 19:03:43.712486 containerd[1951]: time="2025-02-13T19:03:43.712428452Z" level=info msg="RemovePodSandbox for \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\"" Feb 13 19:03:43.712579 containerd[1951]: time="2025-02-13T19:03:43.712482980Z" level=info msg="Forcibly stopping sandbox \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\"" Feb 13 19:03:43.712643 containerd[1951]: time="2025-02-13T19:03:43.712590620Z" level=info msg="TearDown network for sandbox \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" successfully" Feb 13 19:03:43.719132 containerd[1951]: time="2025-02-13T19:03:43.719050484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:03:43.719393 containerd[1951]: time="2025-02-13T19:03:43.719141936Z" level=info msg="RemovePodSandbox \"9fb463e00ff30d4566c5dd4883b0558e7b5fb1cf1dda71f99c3806a5d9194308\" returns successfully" Feb 13 19:03:43.961774 kubelet[3494]: E0213 19:03:43.961527 3494 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:03:44.285141 containerd[1951]: time="2025-02-13T19:03:44.284471659Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:03:44.321503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826071383.mount: Deactivated successfully. Feb 13 19:03:44.323086 containerd[1951]: time="2025-02-13T19:03:44.322930927Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f\"" Feb 13 19:03:44.324440 containerd[1951]: time="2025-02-13T19:03:44.324381967Z" level=info msg="StartContainer for \"33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f\"" Feb 13 19:03:44.373734 systemd[1]: Started cri-containerd-33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f.scope - libcontainer container 33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f. Feb 13 19:03:44.419644 systemd[1]: cri-containerd-33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f.scope: Deactivated successfully. 
Feb 13 19:03:44.425894 containerd[1951]: time="2025-02-13T19:03:44.425723443Z" level=info msg="StartContainer for \"33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f\" returns successfully" Feb 13 19:03:44.468988 containerd[1951]: time="2025-02-13T19:03:44.468775736Z" level=info msg="shim disconnected" id=33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f namespace=k8s.io Feb 13 19:03:44.468988 containerd[1951]: time="2025-02-13T19:03:44.468876164Z" level=warning msg="cleaning up after shim disconnected" id=33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f namespace=k8s.io Feb 13 19:03:44.468988 containerd[1951]: time="2025-02-13T19:03:44.468903068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:44.674916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33b60d72222b0067c33e8f3fbef0589dbac3b11565f650c7cda8aba3df7b193f-rootfs.mount: Deactivated successfully. Feb 13 19:03:45.300339 containerd[1951]: time="2025-02-13T19:03:45.300260972Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:03:45.331750 containerd[1951]: time="2025-02-13T19:03:45.331436348Z" level=info msg="CreateContainer within sandbox \"6afa0b4163d5eb01f9be4a123174c75a70d7a6a6bb66c2eae8fe4e28ec761847\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee\"" Feb 13 19:03:45.335359 containerd[1951]: time="2025-02-13T19:03:45.334353428Z" level=info msg="StartContainer for \"0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee\"" Feb 13 19:03:45.395579 systemd[1]: run-containerd-runc-k8s.io-0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee-runc.4kK2ql.mount: Deactivated successfully. Feb 13 19:03:45.409609 systemd[1]: Started cri-containerd-0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee.scope - libcontainer container 0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee. 
Feb 13 19:03:45.464880 containerd[1951]: time="2025-02-13T19:03:45.464808392Z" level=info msg="StartContainer for \"0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee\" returns successfully" Feb 13 19:03:46.384944 kubelet[3494]: I0213 19:03:46.384190 3494 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lxv7m" podStartSLOduration=5.384167781 podStartE2EDuration="5.384167781s" podCreationTimestamp="2025-02-13 19:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:46.383939373 +0000 UTC m=+122.964224904" watchObservedRunningTime="2025-02-13 19:03:46.384167781 +0000 UTC m=+122.964453240" Feb 13 19:03:46.393361 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:03:46.637822 kubelet[3494]: I0213 19:03:46.636489 3494 setters.go:580] "Node became not ready" node="ip-172-31-27-65" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:03:46Z","lastTransitionTime":"2025-02-13T19:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:03:50.656239 systemd-networkd[1865]: lxc_health: Link UP Feb 13 19:03:50.669990 (udev-worker)[6187]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:50.673199 systemd-networkd[1865]: lxc_health: Gained carrier Feb 13 19:03:51.203812 systemd[1]: run-containerd-runc-k8s.io-0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee-runc.ibOICc.mount: Deactivated successfully. Feb 13 19:03:52.143575 systemd-networkd[1865]: lxc_health: Gained IPv6LL Feb 13 19:03:53.614932 systemd[1]: run-containerd-runc-k8s.io-0e0c102379f46425ef231fdf77f52de505c41ecbbfb59e97d378b51e4114f6ee-runc.bnGQOs.mount: Deactivated successfully. Feb 13 19:03:54.570100 ntpd[1928]: Listen normally on 14 lxc_health [fe80::8422:94ff:fe50:52e8%14]:123 Feb 13 19:03:54.571724 ntpd[1928]: 13 Feb 19:03:54 ntpd[1928]: Listen normally on 14 lxc_health [fe80::8422:94ff:fe50:52e8%14]:123 Feb 13 19:03:58.304660 sshd[5437]: Connection closed by 139.178.89.65 port 33450 Feb 13 19:03:58.305736 sshd-session[5369]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:58.313570 systemd-logind[1936]: Session 32 logged out. Waiting for processes to exit. Feb 13 19:03:58.318489 systemd[1]: sshd@31-172.31.27.65:22-139.178.89.65:33450.service: Deactivated successfully. Feb 13 19:03:58.326140 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 19:03:58.328516 systemd-logind[1936]: Removed session 32. Feb 13 19:04:13.061926 systemd[1]: cri-containerd-c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6.scope: Deactivated successfully. Feb 13 19:04:13.065594 systemd[1]: cri-containerd-c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6.scope: Consumed 4.575s CPU time, 58.6M memory peak. Feb 13 19:04:13.103292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:13.123143 containerd[1951]: time="2025-02-13T19:04:13.123012838Z" level=info msg="shim disconnected" id=c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6 namespace=k8s.io Feb 13 19:04:13.123836 containerd[1951]: time="2025-02-13T19:04:13.123689914Z" level=warning msg="cleaning up after shim disconnected" id=c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6 namespace=k8s.io Feb 13 19:04:13.123836 containerd[1951]: time="2025-02-13T19:04:13.123719254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:13.144697 containerd[1951]: time="2025-02-13T19:04:13.144636826Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:04:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:04:13.379708 kubelet[3494]: I0213 19:04:13.379549 3494 scope.go:117] "RemoveContainer" containerID="c71ef8ca289712deab225874d3daf5b3650d188825c22c39b09e89f1881b41c6" Feb 13 19:04:13.385148 containerd[1951]: time="2025-02-13T19:04:13.384785327Z" level=info msg="CreateContainer within sandbox \"8dd574c8b14af2c6fa0d115ddf2f24f85e02c71a943dfad8be5a0955386c7d63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 19:04:13.411563 containerd[1951]: time="2025-02-13T19:04:13.411482183Z" level=info msg="CreateContainer within sandbox \"8dd574c8b14af2c6fa0d115ddf2f24f85e02c71a943dfad8be5a0955386c7d63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"13be1562a91bc0a2b18379093ee621f95497f24c124271eb329d0ec4acfd4c90\"" Feb 13 19:04:13.412154 containerd[1951]: time="2025-02-13T19:04:13.412111247Z" level=info msg="StartContainer for \"13be1562a91bc0a2b18379093ee621f95497f24c124271eb329d0ec4acfd4c90\"" Feb 13 19:04:13.465813 systemd[1]: Started cri-containerd-13be1562a91bc0a2b18379093ee621f95497f24c124271eb329d0ec4acfd4c90.scope - libcontainer container 13be1562a91bc0a2b18379093ee621f95497f24c124271eb329d0ec4acfd4c90. Feb 13 19:04:13.536211 containerd[1951]: time="2025-02-13T19:04:13.536077440Z" level=info msg="StartContainer for \"13be1562a91bc0a2b18379093ee621f95497f24c124271eb329d0ec4acfd4c90\" returns successfully" Feb 13 19:04:16.523387 kubelet[3494]: E0213 19:04:16.523325 3494 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-65?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:04:17.855399 systemd[1]: cri-containerd-44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb.scope: Deactivated successfully. Feb 13 19:04:17.856496 systemd[1]: cri-containerd-44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb.scope: Consumed 1.949s CPU time, 21.8M memory peak. Feb 13 19:04:17.896875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:17.921504 containerd[1951]: time="2025-02-13T19:04:17.921369582Z" level=info msg="shim disconnected" id=44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb namespace=k8s.io Feb 13 19:04:17.921504 containerd[1951]: time="2025-02-13T19:04:17.921443754Z" level=warning msg="cleaning up after shim disconnected" id=44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb namespace=k8s.io Feb 13 19:04:17.921504 containerd[1951]: time="2025-02-13T19:04:17.921462714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:18.400564 kubelet[3494]: I0213 19:04:18.400510 3494 scope.go:117] "RemoveContainer" containerID="44c63bede9b79415428eb350fbce68bff53d4d274fef9555a735c1ac28ae88bb" Feb 13 19:04:18.404523 containerd[1951]: time="2025-02-13T19:04:18.404289436Z" level=info msg="CreateContainer within sandbox \"eb9c8b11e78a4d2f8fa50d9c4dab48bddec42fab840fd50d73ad31adf6b60936\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 19:04:18.435533 containerd[1951]: time="2025-02-13T19:04:18.435396112Z" level=info msg="CreateContainer within sandbox \"eb9c8b11e78a4d2f8fa50d9c4dab48bddec42fab840fd50d73ad31adf6b60936\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"392a248747a93e9955f97d1e2afa57eca127e23459af7c3d62fb2c2708785a3d\"" Feb 13 19:04:18.436424 containerd[1951]: time="2025-02-13T19:04:18.436034344Z" level=info msg="StartContainer for \"392a248747a93e9955f97d1e2afa57eca127e23459af7c3d62fb2c2708785a3d\"" Feb 13 19:04:18.494625 systemd[1]: Started cri-containerd-392a248747a93e9955f97d1e2afa57eca127e23459af7c3d62fb2c2708785a3d.scope - libcontainer container 392a248747a93e9955f97d1e2afa57eca127e23459af7c3d62fb2c2708785a3d. Feb 13 19:04:18.559786 containerd[1951]: time="2025-02-13T19:04:18.559714145Z" level=info msg="StartContainer for \"392a248747a93e9955f97d1e2afa57eca127e23459af7c3d62fb2c2708785a3d\" returns successfully" Feb 13 19:04:18.896683 systemd[1]: run-containerd-runc-k8s.io-392a248747a93e9955f97d1e2afa57eca127e23459af7c3d62fb2c2708785a3d-runc.JRWIUr.mount: Deactivated successfully.