Feb 13 15:19:49.260890 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:19:49.261005 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:19:49.261045 kernel: KASLR disabled due to lack of seed
Feb 13 15:19:49.261064 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:19:49.261081 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 15:19:49.261099 kernel: secureboot: Secure boot disabled
Feb 13 15:19:49.261118 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:19:49.261135 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:19:49.261195 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 15:19:49.261224 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:19:49.261252 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:19:49.261270 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:19:49.261287 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:19:49.261305 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:19:49.261325 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:19:49.261351 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:19:49.261370 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:19:49.261388 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:19:49.261406 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:19:49.261425 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:19:49.261444 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:19:49.261462 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:19:49.261481 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:19:49.261500 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:19:49.261518 kernel: Zone ranges:
Feb 13 15:19:49.261538 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:19:49.261565 kernel: DMA32 empty
Feb 13 15:19:49.261585 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:19:49.261603 kernel: Movable zone start for each node
Feb 13 15:19:49.261621 kernel: Early memory node ranges
Feb 13 15:19:49.261640 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:19:49.261678 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:19:49.261702 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:19:49.261721 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:19:49.261740 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:19:49.261758 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:19:49.261777 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:19:49.261794 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:19:49.261818 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:19:49.261836 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:19:49.261860 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:19:49.261877 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:19:49.261895 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:19:49.261916 kernel: psci: Trusted OS migration not required
Feb 13 15:19:49.261934 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:19:49.261952 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:19:49.261970 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:19:49.261990 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:19:49.262008 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:19:49.262027 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:19:49.262045 kernel: CPU features: detected: Spectre-v2
Feb 13 15:19:49.262080 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:19:49.262110 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:19:49.262130 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:19:49.262149 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:19:49.265144 kernel: alternatives: applying boot alternatives
Feb 13 15:19:49.265228 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:19:49.265271 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:19:49.265295 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:19:49.265315 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:19:49.265333 kernel: Fallback order for Node 0: 0
Feb 13 15:19:49.265352 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 15:19:49.265372 kernel: Policy zone: Normal
Feb 13 15:19:49.265392 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:19:49.265410 kernel: software IO TLB: area num 2.
Feb 13 15:19:49.265443 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:19:49.265463 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Feb 13 15:19:49.265483 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:19:49.265502 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:19:49.265522 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:19:49.265541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:19:49.265560 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:19:49.265579 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:19:49.265598 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:19:49.265617 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:19:49.265636 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:19:49.265689 kernel: GICv3: 96 SPIs implemented
Feb 13 15:19:49.265712 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:19:49.265733 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:19:49.265753 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:19:49.265774 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:19:49.265792 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:19:49.265813 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:19:49.265832 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:19:49.265852 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:19:49.265884 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:19:49.265914 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:19:49.265934 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:19:49.265967 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:19:49.265988 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:19:49.266009 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:19:49.266031 kernel: Console: colour dummy device 80x25
Feb 13 15:19:49.266051 kernel: printk: console [tty1] enabled
Feb 13 15:19:49.266073 kernel: ACPI: Core revision 20230628
Feb 13 15:19:49.266094 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:19:49.266116 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:19:49.266137 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:19:49.266341 kernel: landlock: Up and running.
Feb 13 15:19:49.266405 kernel: SELinux: Initializing.
Feb 13 15:19:49.266429 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:19:49.266453 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:19:49.266475 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:19:49.266497 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:19:49.266518 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:19:49.266538 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:19:49.266557 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:19:49.266591 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:19:49.266611 kernel: Remapping and enabling EFI services.
Feb 13 15:19:49.266630 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:19:49.266648 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:19:49.266667 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:19:49.266686 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:19:49.266705 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:19:49.266723 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:19:49.266741 kernel: SMP: Total of 2 processors activated.
Feb 13 15:19:49.266759 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:19:49.266790 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:19:49.266810 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:19:49.266844 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:19:49.266871 kernel: alternatives: applying system-wide alternatives
Feb 13 15:19:49.266891 kernel: devtmpfs: initialized
Feb 13 15:19:49.266912 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:19:49.266933 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:19:49.266953 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:19:49.266974 kernel: SMBIOS 3.0.0 present.
Feb 13 15:19:49.267006 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:19:49.267028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:19:49.267049 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:19:49.267068 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:19:49.267089 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:19:49.267109 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:19:49.267130 kernel: audit: type=2000 audit(0.258:1): state=initialized audit_enabled=0 res=1
Feb 13 15:19:49.274898 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:19:49.275049 kernel: cpuidle: using governor menu
Feb 13 15:19:49.275226 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:19:49.275252 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:19:49.275272 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:19:49.275292 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:19:49.275312 kernel: Modules: 17360 pages in range for non-PLT usage
Feb 13 15:19:49.275332 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:19:49.275352 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:19:49.275387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:19:49.275408 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:19:49.275427 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:19:49.275447 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:19:49.275466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:19:49.275485 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:19:49.275504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:19:49.275524 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:19:49.275544 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:19:49.275573 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:19:49.275594 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:19:49.275613 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:19:49.275632 kernel: ACPI: Interpreter enabled
Feb 13 15:19:49.275652 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:19:49.275672 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:19:49.275693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:19:49.276221 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:19:49.276549 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:19:49.276785 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:19:49.276985 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:19:49.280460 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:19:49.280511 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 15:19:49.280531 kernel: acpiphp: Slot [1] registered
Feb 13 15:19:49.280550 kernel: acpiphp: Slot [2] registered
Feb 13 15:19:49.280570 kernel: acpiphp: Slot [3] registered
Feb 13 15:19:49.280599 kernel: acpiphp: Slot [4] registered
Feb 13 15:19:49.280619 kernel: acpiphp: Slot [5] registered
Feb 13 15:19:49.280638 kernel: acpiphp: Slot [6] registered
Feb 13 15:19:49.280657 kernel: acpiphp: Slot [7] registered
Feb 13 15:19:49.280676 kernel: acpiphp: Slot [8] registered
Feb 13 15:19:49.280695 kernel: acpiphp: Slot [9] registered
Feb 13 15:19:49.280714 kernel: acpiphp: Slot [10] registered
Feb 13 15:19:49.280733 kernel: acpiphp: Slot [11] registered
Feb 13 15:19:49.280751 kernel: acpiphp: Slot [12] registered
Feb 13 15:19:49.280770 kernel: acpiphp: Slot [13] registered
Feb 13 15:19:49.280797 kernel: acpiphp: Slot [14] registered
Feb 13 15:19:49.280816 kernel: acpiphp: Slot [15] registered
Feb 13 15:19:49.280836 kernel: acpiphp: Slot [16] registered
Feb 13 15:19:49.280854 kernel: acpiphp: Slot [17] registered
Feb 13 15:19:49.280892 kernel: acpiphp: Slot [18] registered
Feb 13 15:19:49.280920 kernel: acpiphp: Slot [19] registered
Feb 13 15:19:49.280939 kernel: acpiphp: Slot [20] registered
Feb 13 15:19:49.280958 kernel: acpiphp: Slot [21] registered
Feb 13 15:19:49.280978 kernel: acpiphp: Slot [22] registered
Feb 13 15:19:49.281005 kernel: acpiphp: Slot [23] registered
Feb 13 15:19:49.281025 kernel: acpiphp: Slot [24] registered
Feb 13 15:19:49.281044 kernel: acpiphp: Slot [25] registered
Feb 13 15:19:49.281062 kernel: acpiphp: Slot [26] registered
Feb 13 15:19:49.281081 kernel: acpiphp: Slot [27] registered
Feb 13 15:19:49.281100 kernel: acpiphp: Slot [28] registered
Feb 13 15:19:49.281119 kernel: acpiphp: Slot [29] registered
Feb 13 15:19:49.281138 kernel: acpiphp: Slot [30] registered
Feb 13 15:19:49.281189 kernel: acpiphp: Slot [31] registered
Feb 13 15:19:49.281214 kernel: PCI host bridge to bus 0000:00
Feb 13 15:19:49.281586 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:19:49.281886 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:19:49.282313 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:19:49.282565 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:19:49.282821 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:19:49.283072 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:19:49.283572 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:19:49.283923 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:19:49.284247 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:19:49.284543 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:19:49.284793 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:19:49.285007 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:19:49.285257 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:19:49.285488 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:19:49.285774 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:19:49.286057 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:19:49.288676 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:19:49.288915 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:19:49.289122 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:19:49.289369 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:19:49.289581 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:19:49.289852 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:19:49.290123 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:19:49.290741 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:19:49.290780 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:19:49.290800 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:19:49.290820 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:19:49.290839 kernel: iommu: Default domain type: Translated
Feb 13 15:19:49.290869 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:19:49.290888 kernel: efivars: Registered efivars operations
Feb 13 15:19:49.290907 kernel: vgaarb: loaded
Feb 13 15:19:49.290926 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:19:49.290945 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:19:49.290963 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:19:49.290982 kernel: pnp: PnP ACPI init
Feb 13 15:19:49.291371 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:19:49.291426 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:19:49.291449 kernel: NET: Registered PF_INET protocol family
Feb 13 15:19:49.291468 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:19:49.291488 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:19:49.291507 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:19:49.291526 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:19:49.291544 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:19:49.291563 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:19:49.291582 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:19:49.291607 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:19:49.291627 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:19:49.291646 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:19:49.291664 kernel: kvm [1]: HYP mode not available
Feb 13 15:19:49.291685 kernel: Initialise system trusted keyrings
Feb 13 15:19:49.291704 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:19:49.291723 kernel: Key type asymmetric registered
Feb 13 15:19:49.291742 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:19:49.291762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:19:49.291789 kernel: io scheduler mq-deadline registered
Feb 13 15:19:49.291809 kernel: io scheduler kyber registered
Feb 13 15:19:49.291830 kernel: io scheduler bfq registered
Feb 13 15:19:49.292150 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:19:49.292240 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:19:49.292262 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:19:49.292282 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:19:49.292301 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:19:49.292331 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:19:49.292353 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:19:49.292701 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:19:49.292745 kernel: printk: console [ttyS0] disabled
Feb 13 15:19:49.292769 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:19:49.292792 kernel: printk: console [ttyS0] enabled
Feb 13 15:19:49.292811 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:19:49.292829 kernel: thunder_xcv, ver 1.0
Feb 13 15:19:49.292849 kernel: thunder_bgx, ver 1.0
Feb 13 15:19:49.292868 kernel: nicpf, ver 1.0
Feb 13 15:19:49.292897 kernel: nicvf, ver 1.0
Feb 13 15:19:49.295240 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:19:49.295586 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:19:48 UTC (1739459988)
Feb 13 15:19:49.295627 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:19:49.295647 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:19:49.295666 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:19:49.295686 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:19:49.295716 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:19:49.295736 kernel: Segment Routing with IPv6
Feb 13 15:19:49.295754 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:19:49.295773 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:19:49.295792 kernel: Key type dns_resolver registered
Feb 13 15:19:49.295811 kernel: registered taskstats version 1
Feb 13 15:19:49.295830 kernel: Loading compiled-in X.509 certificates
Feb 13 15:19:49.295849 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:19:49.295868 kernel: Key type .fscrypt registered
Feb 13 15:19:49.295887 kernel: Key type fscrypt-provisioning registered
Feb 13 15:19:49.295916 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:19:49.295937 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:19:49.295957 kernel: ima: No architecture policies found
Feb 13 15:19:49.295978 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:19:49.295999 kernel: clk: Disabling unused clocks
Feb 13 15:19:49.296018 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:19:49.296038 kernel: Run /init as init process
Feb 13 15:19:49.296057 kernel: with arguments:
Feb 13 15:19:49.296077 kernel: /init
Feb 13 15:19:49.296104 kernel: with environment:
Feb 13 15:19:49.296125 kernel: HOME=/
Feb 13 15:19:49.296145 kernel: TERM=linux
Feb 13 15:19:49.296293 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:19:49.296327 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:19:49.296355 systemd[1]: Detected virtualization amazon.
Feb 13 15:19:49.296378 systemd[1]: Detected architecture arm64.
Feb 13 15:19:49.296412 systemd[1]: Running in initrd.
Feb 13 15:19:49.296435 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:19:49.296456 systemd[1]: Hostname set to .
Feb 13 15:19:49.296478 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:19:49.296501 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:19:49.296524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:19:49.296548 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:19:49.296573 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:19:49.296609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:19:49.296634 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:19:49.296657 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:19:49.296683 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:19:49.296707 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:19:49.296731 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:19:49.296755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:19:49.296788 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:19:49.296811 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:19:49.296831 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:19:49.296851 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:19:49.296871 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:19:49.296891 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:19:49.296912 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:19:49.296933 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:19:49.296954 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:19:49.296982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:19:49.297002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:19:49.297023 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:19:49.297043 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:19:49.297064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:19:49.297084 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:19:49.297104 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:19:49.297124 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:19:49.297149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:19:49.297809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:19:49.297833 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:19:49.297855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:19:49.297877 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:19:49.297901 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:19:49.297937 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:19:49.298026 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 15:19:49.298081 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:19:49.298113 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:19:49.298274 kernel: Bridge firewalling registered
Feb 13 15:19:49.298316 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:19:49.298341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:19:49.298365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:19:49.298431 systemd-journald[251]: Journal started
Feb 13 15:19:49.298431 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2f690b86d27640be0f8b96eae9f0df) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:19:49.226733 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 15:19:49.262054 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 15:19:49.315266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:19:49.321498 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:19:49.324194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:19:49.344320 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:19:49.357625 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:19:49.374214 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:19:49.386758 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:19:49.401185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:19:49.415603 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:19:49.438178 dracut-cmdline[286]: dracut-dracut-053
Feb 13 15:19:49.449584 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:19:49.501390 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 15:19:49.501443 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:19:49.501507 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:19:49.617180 kernel: SCSI subsystem initialized
Feb 13 15:19:49.624193 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:19:49.636198 kernel: iscsi: registered transport (tcp)
Feb 13 15:19:49.658391 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:19:49.658464 kernel: QLogic iSCSI HBA Driver
Feb 13 15:19:49.736204 kernel: random: crng init done
Feb 13 15:19:49.736463 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 15:19:49.740069 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:19:49.743059 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:19:49.774133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:19:49.788562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:19:49.826804 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:19:49.828298 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:19:49.828330 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:19:49.901238 kernel: raid6: neonx8 gen() 6461 MB/s
Feb 13 15:19:49.918214 kernel: raid6: neonx4 gen() 6396 MB/s
Feb 13 15:19:49.935204 kernel: raid6: neonx2 gen() 5356 MB/s
Feb 13 15:19:49.952213 kernel: raid6: neonx1 gen() 3909 MB/s
Feb 13 15:19:49.969205 kernel: raid6: int64x8 gen() 3590 MB/s
Feb 13 15:19:49.986204 kernel: raid6: int64x4 gen() 3679 MB/s
Feb 13 15:19:50.003213 kernel: raid6: int64x2 gen() 3577 MB/s
Feb 13 15:19:50.021017 kernel: raid6: int64x1 gen() 2740 MB/s
Feb 13 15:19:50.021084 kernel: raid6: using algorithm neonx8 gen() 6461 MB/s
Feb 13 15:19:50.039080 kernel: raid6: .... xor() 4704 MB/s, rmw enabled
Feb 13 15:19:50.039220 kernel: raid6: using neon recovery algorithm
Feb 13 15:19:50.047818 kernel: xor: measuring software checksum speed
Feb 13 15:19:50.047909 kernel: 8regs : 12589 MB/sec
Feb 13 15:19:50.048942 kernel: 32regs : 12410 MB/sec
Feb 13 15:19:50.051098 kernel: arm64_neon : 8953 MB/sec
Feb 13 15:19:50.051146 kernel: xor: using function: 8regs (12589 MB/sec)
Feb 13 15:19:50.145230 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:19:50.169642 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:19:50.189505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:19:50.222960 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 15:19:50.232623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:19:50.255911 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:19:50.287469 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Feb 13 15:19:50.342573 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:19:50.354465 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:19:50.480290 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:19:50.506638 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:19:50.560824 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:19:50.569365 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:19:50.574269 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:19:50.574470 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:19:50.601490 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:19:50.653512 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:19:50.691344 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:19:50.691414 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:19:50.715503 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:19:50.716478 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:19:50.716712 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:46:88:95:99:23
Feb 13 15:19:50.715285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:19:50.715404 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:19:50.718246 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:19:50.751310 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:19:50.751348 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:19:50.720401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:19:50.720592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:19:50.722405 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:19:50.723457 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:19:50.759439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:19:50.772196 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:19:50.784200 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:19:50.784269 kernel: GPT:9289727 != 16777215
Feb 13 15:19:50.784295 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:19:50.786887 kernel: GPT:9289727 != 16777215
Feb 13 15:19:50.786971 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:19:50.787959 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:50.799852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:19:50.811456 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:19:50.856899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:19:50.934686 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (516)
Feb 13 15:19:50.934757 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (530)
Feb 13 15:19:50.993707 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:19:51.018467 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:19:51.065336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:19:51.068753 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:19:51.102576 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:19:51.120531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:19:51.135109 disk-uuid[662]: Primary Header is updated.
Feb 13 15:19:51.135109 disk-uuid[662]: Secondary Entries is updated.
Feb 13 15:19:51.135109 disk-uuid[662]: Secondary Header is updated.
Feb 13 15:19:51.147223 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:51.160394 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:52.169082 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:19:52.169854 disk-uuid[663]: The operation has completed successfully.
Feb 13 15:19:52.382023 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:19:52.382281 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:19:52.432474 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:19:52.450388 sh[921]: Success
Feb 13 15:19:52.478206 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:19:52.608468 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:19:52.620402 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:19:52.630301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:19:52.666026 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f
Feb 13 15:19:52.666092 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:52.666118 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:19:52.667840 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:19:52.669120 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:19:52.788220 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:19:52.801806 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:19:52.805230 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:19:52.825569 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:19:52.833620 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:19:52.865330 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:52.865425 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:52.865461 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:19:52.873210 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:19:52.894658 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:19:52.897370 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:52.910262 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:19:52.932584 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:19:53.041679 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:19:53.066455 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:19:53.129863 systemd-networkd[1114]: lo: Link UP
Feb 13 15:19:53.129890 systemd-networkd[1114]: lo: Gained carrier
Feb 13 15:19:53.133289 systemd-networkd[1114]: Enumeration completed
Feb 13 15:19:53.133539 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:19:53.134148 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:53.134242 systemd-networkd[1114]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:19:53.146621 systemd[1]: Reached target network.target - Network.
Feb 13 15:19:53.153001 systemd-networkd[1114]: eth0: Link UP
Feb 13 15:19:53.153009 systemd-networkd[1114]: eth0: Gained carrier
Feb 13 15:19:53.153028 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:19:53.175309 systemd-networkd[1114]: eth0: DHCPv4 address 172.31.28.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:19:53.307611 ignition[1034]: Ignition 2.20.0
Feb 13 15:19:53.307642 ignition[1034]: Stage: fetch-offline
Feb 13 15:19:53.309187 ignition[1034]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:53.309237 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:53.312507 ignition[1034]: Ignition finished successfully
Feb 13 15:19:53.318147 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:19:53.335550 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:19:53.361245 ignition[1124]: Ignition 2.20.0
Feb 13 15:19:53.361267 ignition[1124]: Stage: fetch
Feb 13 15:19:53.361887 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:53.361912 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:53.362090 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:53.377981 ignition[1124]: PUT result: OK
Feb 13 15:19:53.381056 ignition[1124]: parsed url from cmdline: ""
Feb 13 15:19:53.381078 ignition[1124]: no config URL provided
Feb 13 15:19:53.381097 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:19:53.381142 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:19:53.381227 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:53.384887 ignition[1124]: PUT result: OK
Feb 13 15:19:53.386243 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:19:53.390264 ignition[1124]: GET result: OK
Feb 13 15:19:53.400635 unknown[1124]: fetched base config from "system"
Feb 13 15:19:53.390435 ignition[1124]: parsing config with SHA512: 331feee4e80b6f9aef4b7abe0f9f4fe5e86d6380bd065d0950c50e5c7734f36bc4456145bff68a2fb367f2296982be8567c167ba1227730caaa00f71d7c1c079
Feb 13 15:19:53.400652 unknown[1124]: fetched base config from "system"
Feb 13 15:19:53.401386 ignition[1124]: fetch: fetch complete
Feb 13 15:19:53.400678 unknown[1124]: fetched user config from "aws"
Feb 13 15:19:53.401400 ignition[1124]: fetch: fetch passed
Feb 13 15:19:53.407738 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:19:53.401505 ignition[1124]: Ignition finished successfully
Feb 13 15:19:53.425520 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:19:53.448654 ignition[1131]: Ignition 2.20.0
Feb 13 15:19:53.448683 ignition[1131]: Stage: kargs
Feb 13 15:19:53.449626 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:53.449852 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:53.450357 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:53.453637 ignition[1131]: PUT result: OK
Feb 13 15:19:53.461485 ignition[1131]: kargs: kargs passed
Feb 13 15:19:53.461597 ignition[1131]: Ignition finished successfully
Feb 13 15:19:53.477370 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:19:53.495080 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:19:53.517258 ignition[1137]: Ignition 2.20.0
Feb 13 15:19:53.517281 ignition[1137]: Stage: disks
Feb 13 15:19:53.517878 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:53.517902 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:53.518076 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:53.522590 ignition[1137]: PUT result: OK
Feb 13 15:19:53.529095 ignition[1137]: disks: disks passed
Feb 13 15:19:53.529573 ignition[1137]: Ignition finished successfully
Feb 13 15:19:53.537482 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:19:53.542281 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:19:53.546480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:19:53.550955 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:19:53.557270 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:19:53.559449 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:19:53.570520 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:19:53.628844 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:19:53.634846 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:19:53.645362 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:19:53.740467 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:19:53.741623 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:19:53.745712 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:19:53.766354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:19:53.772832 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:19:53.776740 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:19:53.776842 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:19:53.776899 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:19:53.804547 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1164)
Feb 13 15:19:53.811802 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:53.811896 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:53.811924 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:19:53.816048 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:19:53.835263 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:19:53.836625 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:19:53.844105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:19:54.218134 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:19:54.238476 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:19:54.254650 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:19:54.264267 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:19:54.604580 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:19:54.622514 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:19:54.629575 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:19:54.644043 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:19:54.647682 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:54.694575 ignition[1276]: INFO : Ignition 2.20.0
Feb 13 15:19:54.694575 ignition[1276]: INFO : Stage: mount
Feb 13 15:19:54.699401 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:54.699401 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:54.699401 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:54.706506 ignition[1276]: INFO : PUT result: OK
Feb 13 15:19:54.710093 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:19:54.716854 ignition[1276]: INFO : mount: mount passed
Feb 13 15:19:54.718601 ignition[1276]: INFO : Ignition finished successfully
Feb 13 15:19:54.722388 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:19:54.733343 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:19:54.761626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:19:54.794227 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Feb 13 15:19:54.798305 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:19:54.798383 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:19:54.798411 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:19:54.805201 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:19:54.809033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:19:54.817456 systemd-networkd[1114]: eth0: Gained IPv6LL
Feb 13 15:19:54.862073 ignition[1306]: INFO : Ignition 2.20.0
Feb 13 15:19:54.862073 ignition[1306]: INFO : Stage: files
Feb 13 15:19:54.865591 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:19:54.865591 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:19:54.865591 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:19:54.872359 ignition[1306]: INFO : PUT result: OK
Feb 13 15:19:54.877672 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:19:54.890700 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:19:54.890700 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:19:54.913922 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:19:54.916841 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:19:54.922071 unknown[1306]: wrote ssh authorized keys file for user: core
Feb 13 15:19:54.924781 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:19:54.936527 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:19:54.936527 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:19:55.041850 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:19:55.206454 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:19:55.206454 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:19:55.213446 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:19:55.663397 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:19:55.803215 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:19:55.803215 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:19:55.810808 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:19:56.222478 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:19:56.624534 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:19:56.624534 ignition[1306]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:19:56.632583 ignition[1306]: INFO : files: files passed
Feb 13 15:19:56.632583 ignition[1306]: INFO : Ignition finished successfully
Feb 13 15:19:56.658918 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:19:56.671496 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:19:56.683244 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:19:56.697733 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:19:56.698365 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:19:56.734038 initrd-setup-root-after-ignition[1335]: grep: Feb 13 15:19:56.736470 initrd-setup-root-after-ignition[1339]: grep: Feb 13 15:19:56.736470 initrd-setup-root-after-ignition[1335]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:19:56.736470 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:19:56.744777 initrd-setup-root-after-ignition[1339]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:19:56.747728 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:19:56.755527 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:19:56.773234 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:19:56.827783 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:19:56.828464 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:19:56.832631 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:19:56.834741 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:19:56.839124 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:19:56.842420 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:19:56.883510 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:19:56.897680 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:19:56.920912 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:19:56.925447 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:19:56.930150 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:19:56.933813 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:19:56.934120 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:19:56.940997 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:19:56.943476 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:19:56.948853 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:19:56.951726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:19:56.958520 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:19:56.960884 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:19:56.963424 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:19:56.967282 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:19:56.977011 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:19:56.979487 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:19:56.984174 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:19:56.984422 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:19:56.988390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:19:56.992453 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
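The two grep failures above are benign: the root-completion step probes for an enabled-sysext.conf under /sysroot, and this image ships none. A sketch of the same "missing file means nothing enabled" pattern; the helper name is our own invention, the paths are taken from the log.

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
        "strings"
    )

    // readEnabledSysexts returns the listed extension names, or nil when the
    // file does not exist -- mirroring how the grep failure above is non-fatal.
    func readEnabledSysexts(path string) ([]string, error) {
        data, err := os.ReadFile(path)
        if errors.Is(err, fs.ErrNotExist) {
            return nil, nil // an absent config simply means "nothing enabled"
        }
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(data)), nil
    }

    func main() {
        for _, p := range []string{
            "/sysroot/etc/flatcar/enabled-sysext.conf",
            "/sysroot/usr/share/flatcar/enabled-sysext.conf",
        } {
            names, err := readEnabledSysexts(p)
            fmt.Println(p, names, err)
        }
    }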
Feb 13 15:19:56.994811 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:19:56.998853 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:19:57.006258 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:19:57.006490 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:19:57.009130 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:19:57.009374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:19:57.012076 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:19:57.012371 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:19:57.037362 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:19:57.044573 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:19:57.044980 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:19:57.048540 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:19:57.056177 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:19:57.058448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:19:57.078489 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:19:57.081046 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:19:57.090113 ignition[1359]: INFO : Ignition 2.20.0 Feb 13 15:19:57.090113 ignition[1359]: INFO : Stage: umount Feb 13 15:19:57.095744 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:19:57.095744 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:19:57.095744 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:19:57.095744 ignition[1359]: INFO : PUT result: OK Feb 13 15:19:57.108145 ignition[1359]: INFO : umount: umount passed Feb 13 15:19:57.108145 ignition[1359]: INFO : Ignition finished successfully Feb 13 15:19:57.116328 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:19:57.116607 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:19:57.122851 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:19:57.122969 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:19:57.125179 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:19:57.125300 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:19:57.129525 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:19:57.129633 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:19:57.133502 systemd[1]: Stopped target network.target - Network. Feb 13 15:19:57.136112 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:19:57.136369 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:19:57.141533 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:19:57.145321 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:19:57.147239 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:19:57.150131 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 15:19:57.157123 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:19:57.160829 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:19:57.160920 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:19:57.164695 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:19:57.164797 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:19:57.167307 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:19:57.167408 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:19:57.170458 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:19:57.170588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:19:57.173351 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:19:57.175532 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:19:57.178341 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:19:57.179558 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:19:57.179754 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:19:57.180768 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:19:57.180988 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:19:57.197385 systemd-networkd[1114]: eth0: DHCPv6 lease lost Feb 13 15:19:57.202533 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:19:57.204271 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:19:57.208084 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:19:57.209715 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:19:57.217418 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:19:57.217564 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:19:57.231035 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:19:57.237038 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:19:57.237177 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:19:57.240721 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:19:57.240807 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:19:57.281298 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:19:57.281402 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:19:57.283446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:19:57.283533 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:19:57.286071 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:19:57.317007 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:19:57.318901 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:19:57.325747 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:19:57.326555 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:19:57.332846 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:19:57.332948 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 15:19:57.335134 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:19:57.335449 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:19:57.345215 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:19:57.345314 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:19:57.347804 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:19:57.347890 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:19:57.352951 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:19:57.356755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:19:57.373128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:19:57.376340 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:19:57.376478 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:19:57.381367 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:19:57.381478 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:19:57.388458 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:19:57.388593 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:19:57.394608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:19:57.394719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:19:57.399687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:19:57.399933 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:19:57.404495 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:19:57.409566 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:19:57.466379 systemd[1]: Switching root. Feb 13 15:19:57.512798 systemd-journald[251]: Journal stopped Feb 13 15:20:00.168312 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 15:20:00.168473 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:20:00.168541 kernel: SELinux: policy capability open_perms=1 Feb 13 15:20:00.168576 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:20:00.168607 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:20:00.168638 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:20:00.168678 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:20:00.168709 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:20:00.168738 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:20:00.168777 kernel: audit: type=1403 audit(1739459998.127:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:20:00.168812 systemd[1]: Successfully loaded SELinux policy in 49.007ms. Feb 13 15:20:00.168850 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.089ms. 
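The SELinux lines mark the hand-off to the real root: the kernel reports the policy capabilities, and systemd loads the policy and relabels the early mounts. A small sketch that inspects the resulting mode through the standard selinuxfs interface; whether /sys/fs/selinux/enforce exists at all depends on the host.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // /sys/fs/selinux/enforce holds "1" for enforcing, "0" for permissive;
        // the file is absent when SELinux is not enabled at all.
        data, err := os.ReadFile("/sys/fs/selinux/enforce")
        if err != nil {
            fmt.Println("SELinux not enabled:", err)
            return
        }
        switch strings.TrimSpace(string(data)) {
        case "1":
            fmt.Println("SELinux: enforcing")
        default:
            fmt.Println("SELinux: permissive")
        }
    }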
Feb 13 15:20:00.168885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:20:00.168917 systemd[1]: Detected virtualization amazon. Feb 13 15:20:00.168950 systemd[1]: Detected architecture arm64. Feb 13 15:20:00.168980 systemd[1]: Detected first boot. Feb 13 15:20:00.169011 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:20:00.169045 zram_generator::config[1402]: No configuration found. Feb 13 15:20:00.169081 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:20:00.169114 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:20:00.169146 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:20:00.169321 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:20:00.169355 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:20:00.169389 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:20:00.169423 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:20:00.169461 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:20:00.169499 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:20:00.169541 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:20:00.169574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:20:00.169610 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:20:00.169663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:20:00.169709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:20:00.169741 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:20:00.169778 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:20:00.170109 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:20:00.170220 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:20:00.170277 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:20:00.170318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:20:00.170351 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:20:00.170383 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:20:00.170415 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:20:00.170446 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:20:00.170475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:20:00.170513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:20:00.170546 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:20:00.170577 systemd[1]: Reached target swap.target - Swaps. 
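"Initializing machine ID from VM UUID" means first boot seeds /etc/machine-id from the hypervisor-supplied identity rather than generating random bytes. A sketch of that idea, assuming the DMI product UUID as the source; systemd's real probing order covers more cases (containers, other firmware interfaces).

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // On EC2 (and most VMs) the firmware exposes the instance identity here.
        raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
        if err != nil {
            fmt.Println("no DMI UUID, would fall back to a random ID:", err)
            return
        }
        // machine-id format: the UUID lower-cased with the dashes stripped.
        id := strings.ToLower(strings.ReplaceAll(strings.TrimSpace(string(raw)), "-", ""))
        fmt.Println("candidate machine-id:", id)
    }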
Feb 13 15:20:00.170606 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:20:00.170637 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:20:00.170668 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:20:00.170883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:20:00.170924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:20:00.170956 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:20:00.170998 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:20:00.171034 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:20:00.171065 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:20:00.171096 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:20:00.171125 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:20:00.171201 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:20:00.171236 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:20:00.171268 systemd[1]: Reached target machines.target - Containers. Feb 13 15:20:00.171300 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:20:00.171564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:20:00.171616 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:20:00.171657 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:20:00.171690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:20:00.171725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:20:00.171757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:20:00.171788 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:20:00.171822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:20:00.171860 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:20:00.171900 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:20:00.171939 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:20:00.171974 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:20:00.172010 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:20:00.172047 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:20:00.172087 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:20:00.173567 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:20:00.173622 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:20:00.173693 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:20:00.173742 systemd[1]: verity-setup.service: Deactivated successfully. 
Feb 13 15:20:00.173774 systemd[1]: Stopped verity-setup.service. Feb 13 15:20:00.173808 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:20:00.173843 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:20:00.173884 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:20:00.173917 kernel: fuse: init (API version 7.39) Feb 13 15:20:00.173955 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:20:00.173990 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:20:00.174020 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:20:00.174050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:20:00.174084 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:20:00.174115 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:20:00.174151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:20:00.174226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:20:00.174259 kernel: loop: module loaded Feb 13 15:20:00.174288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:20:00.174317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:20:00.174347 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:20:00.174378 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:20:00.174411 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:20:00.174441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:20:00.174486 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:20:00.174516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:20:00.174544 kernel: ACPI: bus type drm_connector registered Feb 13 15:20:00.174573 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:20:00.174654 systemd-journald[1480]: Collecting audit messages is disabled. Feb 13 15:20:00.174718 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:20:00.174749 systemd-journald[1480]: Journal started Feb 13 15:20:00.174805 systemd-journald[1480]: Runtime Journal (/run/log/journal/ec2f690b86d27640be0f8b96eae9f0df) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:19:59.542765 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:19:59.597979 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:19:59.598891 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:20:00.202062 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:20:00.202139 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:20:00.214268 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:20:00.214393 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:20:00.229530 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:20:00.243198 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Feb 13 15:20:00.243295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:20:00.269211 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:20:00.269342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:20:00.281326 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:20:00.289214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:20:00.306791 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:20:00.317737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:20:00.326209 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:20:00.341326 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:20:00.344797 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:20:00.346376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:20:00.351298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:20:00.354592 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:20:00.357630 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:20:00.360777 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:20:00.371386 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:20:00.431822 kernel: loop0: detected capacity change from 0 to 53784 Feb 13 15:20:00.454550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:20:00.466560 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:20:00.484673 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:20:00.494505 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:20:00.533449 systemd-journald[1480]: Time spent on flushing to /var/log/journal/ec2f690b86d27640be0f8b96eae9f0df is 114.715ms for 918 entries. Feb 13 15:20:00.533449 systemd-journald[1480]: System Journal (/var/log/journal/ec2f690b86d27640be0f8b96eae9f0df) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:20:00.663036 systemd-journald[1480]: Received client request to flush runtime journal. Feb 13 15:20:00.663145 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:20:00.663247 kernel: loop1: detected capacity change from 0 to 113552 Feb 13 15:20:00.577501 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Feb 13 15:20:00.577533 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Feb 13 15:20:00.591092 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:20:00.607645 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:20:00.630005 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:20:00.650099 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Feb 13 15:20:00.674059 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:20:00.724352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:20:00.785933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:20:00.799470 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:20:00.825564 kernel: loop2: detected capacity change from 0 to 116784 Feb 13 15:20:00.839244 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:20:00.847530 udevadm[1553]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:20:00.857523 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:20:00.890341 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Feb 13 15:20:00.890382 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Feb 13 15:20:00.903296 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:20:00.946048 kernel: loop3: detected capacity change from 0 to 194096 Feb 13 15:20:01.007233 kernel: loop4: detected capacity change from 0 to 53784 Feb 13 15:20:01.023220 kernel: loop5: detected capacity change from 0 to 113552 Feb 13 15:20:01.044219 kernel: loop6: detected capacity change from 0 to 116784 Feb 13 15:20:01.066197 kernel: loop7: detected capacity change from 0 to 194096 Feb 13 15:20:01.111316 (sd-merge)[1560]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:20:01.112387 (sd-merge)[1560]: Merged extensions into '/usr'. Feb 13 15:20:01.124996 systemd[1]: Reloading requested from client PID 1512 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:20:01.125026 systemd[1]: Reloading... Feb 13 15:20:01.279198 zram_generator::config[1582]: No configuration found. Feb 13 15:20:01.694230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:20:01.817539 systemd[1]: Reloading finished in 691 ms. Feb 13 15:20:01.868316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:20:01.872434 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:20:01.887522 systemd[1]: Starting ensure-sysext.service... Feb 13 15:20:01.893507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:20:01.905603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:20:01.928813 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:20:01.931870 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:20:01.947447 systemd[1]: Reloading requested from client PID 1638 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:20:01.947482 systemd[1]: Reloading... Feb 13 15:20:01.989302 systemd-udevd[1640]: Using default interface naming scheme 'v255'. Feb 13 15:20:01.997793 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
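The loop0-loop7 capacity changes and the (sd-merge) lines are systemd-sysext attaching the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) as loop devices and merging their /usr trees over the base /usr. The merge is a read-only overlayfs stack; a minimal sketch under that assumption, with illustrative layer paths rather than sysext's exact hierarchy.

    package main

    import (
        "fmt"
        "strings"

        "golang.org/x/sys/unix"
    )

    func main() {
        // systemd-sysext stacks each extension's /usr over the base /usr.
        // Lower layers are listed top-to-bottom; everything stays read-only,
        // so no upperdir/workdir is needed.
        lowers := []string{
            "/run/extensions/kubernetes/usr", // illustrative paths, not the
            "/run/extensions/docker/usr",     // exact layout sysext uses
            "/usr",
        }
        opts := "lowerdir=" + strings.Join(lowers, ":")
        if err := unix.Mount("overlay", "/usr", "overlay", unix.MS_RDONLY, opts); err != nil {
            fmt.Println("overlay mount failed (needs root):", err)
        }
    }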
Feb 13 15:20:01.998507 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:20:02.005059 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:20:02.007073 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Feb 13 15:20:02.007293 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Feb 13 15:20:02.022867 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:20:02.022902 systemd-tmpfiles[1639]: Skipping /boot Feb 13 15:20:02.092679 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:20:02.092720 systemd-tmpfiles[1639]: Skipping /boot Feb 13 15:20:02.208479 zram_generator::config[1687]: No configuration found. Feb 13 15:20:02.301506 (udev-worker)[1660]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:20:02.558235 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1685) Feb 13 15:20:02.606122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:20:02.757715 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:20:02.759189 systemd[1]: Reloading finished in 811 ms. Feb 13 15:20:02.796416 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:20:02.800111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:20:02.874866 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:20:02.884705 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:20:02.903925 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:20:02.911706 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:20:02.919671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:20:02.927690 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:20:02.935544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:20:03.008031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:20:03.020518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:20:03.032639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:20:03.042722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:20:03.045483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:20:03.052679 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:20:03.068860 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:20:03.075818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:20:03.078297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
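The "Duplicate line for path ..., ignoring" warnings above show systemd-tmpfiles' first-match-wins rule when several tmpfiles.d fragments declare the same path. A toy sketch of that rule; the parsing is deliberately simplified to pre-split entries.

    package main

    import "fmt"

    // firstMatchWins keeps the first entry per path and reports later
    // duplicates, like the "Duplicate line for path" messages in the log.
    func firstMatchWins(entries [][2]string) map[string]string {
        seen := map[string]string{}
        for _, e := range entries {
            path, line := e[0], e[1]
            if _, dup := seen[path]; dup {
                fmt.Printf("Duplicate line for path %q, ignoring.\n", path)
                continue
            }
            seen[path] = line
        }
        return seen
    }

    func main() {
        firstMatchWins([][2]string{
            {"/root", "d /root 0700 root root -"},
            {"/root", "d /root 0750 root root -"}, // ignored, first one wins
        })
    }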
Feb 13 15:20:03.082509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:20:03.084302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:20:03.096965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:20:03.101768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:20:03.111812 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:20:03.114285 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:20:03.117592 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:20:03.139410 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:20:03.150792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:20:03.169969 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:20:03.175709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:20:03.181840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:20:03.190734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:20:03.191744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:20:03.192093 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:20:03.205722 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:20:03.215287 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:20:03.220199 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:20:03.221329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:20:03.245998 systemd[1]: Finished ensure-sysext.service. Feb 13 15:20:03.247921 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:20:03.271645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:20:03.272362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:20:03.288038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:20:03.289342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:20:03.291534 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:20:03.303539 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:20:03.304834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:20:03.316241 lvm[1863]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:20:03.313792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:20:03.313915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Feb 13 15:20:03.313960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:20:03.336448 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:20:03.360971 augenrules[1886]: No rules Feb 13 15:20:03.364836 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:20:03.365311 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:20:03.369292 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:20:03.373107 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:20:03.392496 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:20:03.403393 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:20:03.409741 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:20:03.423195 lvm[1894]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:20:03.474900 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:20:03.532896 systemd-networkd[1836]: lo: Link UP Feb 13 15:20:03.533581 systemd-networkd[1836]: lo: Gained carrier Feb 13 15:20:03.536783 systemd-networkd[1836]: Enumeration completed Feb 13 15:20:03.537218 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:20:03.542965 systemd-networkd[1836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:20:03.543149 systemd-networkd[1836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:20:03.549454 systemd-networkd[1836]: eth0: Link UP Feb 13 15:20:03.551543 systemd-networkd[1836]: eth0: Gained carrier Feb 13 15:20:03.551811 systemd-networkd[1836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:20:03.552145 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:20:03.562863 systemd-resolved[1837]: Positive Trust Anchors: Feb 13 15:20:03.562906 systemd-resolved[1837]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:20:03.562969 systemd-resolved[1837]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:20:03.567318 systemd-networkd[1836]: eth0: DHCPv4 address 172.31.28.163/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:20:03.583478 systemd-resolved[1837]: Defaulting to hostname 'linux'. Feb 13 15:20:03.586863 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:20:03.589240 systemd[1]: Reached target network.target - Network. Feb 13 15:20:03.591032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 15:20:03.593286 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:20:03.595463 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:20:03.597844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:20:03.600518 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:20:03.602710 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:20:03.604950 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:20:03.607299 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:20:03.607349 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:20:03.609052 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:20:03.612270 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:20:03.616906 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:20:03.625026 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:20:03.628058 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:20:03.630478 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:20:03.632560 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:20:03.634641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:20:03.634692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:20:03.641554 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:20:03.652507 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:20:03.662840 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:20:03.669415 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:20:03.676682 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:20:03.679371 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:20:03.685559 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:20:03.691627 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:20:03.701376 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:20:03.712448 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:20:03.720557 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:20:03.729500 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:20:03.739969 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:20:03.742715 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:20:03.745533 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:20:03.747700 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 15:20:03.758729 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:20:03.782313 jq[1911]: false Feb 13 15:20:03.805807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:20:03.808287 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:20:03.830119 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: ---------------------------------------------------- Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: corporation. Support and training for ntp-4 are Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: available at https://www.nwtime.org/support Feb 13 15:20:03.838962 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: ---------------------------------------------------- Feb 13 15:20:03.830196 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:20:03.830220 ntpd[1914]: ---------------------------------------------------- Feb 13 15:20:03.830241 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:20:03.830260 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:20:03.830278 ntpd[1914]: corporation. Support and training for ntp-4 are Feb 13 15:20:03.830296 ntpd[1914]: available at https://www.nwtime.org/support Feb 13 15:20:03.830313 ntpd[1914]: ---------------------------------------------------- Feb 13 15:20:03.860620 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: proto: precision = 0.096 usec (-23) Feb 13 15:20:03.859024 ntpd[1914]: proto: precision = 0.096 usec (-23) Feb 13 15:20:03.876059 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: basedate set to 2025-02-01 Feb 13 15:20:03.876059 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: gps base set to 2025-02-02 (week 2352) Feb 13 15:20:03.865051 ntpd[1914]: basedate set to 2025-02-01 Feb 13 15:20:03.865086 ntpd[1914]: gps base set to 2025-02-02 (week 2352) Feb 13 15:20:03.881881 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:20:03.882525 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Listen normally on 3 eth0 172.31.28.163:123 Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Listen normally on 4 lo [::1]:123 Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: bind(21) AF_INET6 fe80::446:88ff:fe95:9923%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: unable to create socket on eth0 (5) for fe80::446:88ff:fe95:9923%2#123 Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: failed to init interface for address fe80::446:88ff:fe95:9923%2 Feb 13 15:20:03.901746 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Feb 13 15:20:03.898597 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:20:03.898680 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:20:03.899544 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:20:03.899611 ntpd[1914]: Listen normally on 3 eth0 172.31.28.163:123 Feb 13 15:20:03.899678 ntpd[1914]: Listen normally on 4 lo [::1]:123 Feb 13 15:20:03.899756 ntpd[1914]: bind(21) AF_INET6 fe80::446:88ff:fe95:9923%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:03.899793 ntpd[1914]: unable to create socket on eth0 (5) for fe80::446:88ff:fe95:9923%2#123 Feb 13 15:20:03.899820 ntpd[1914]: failed to init interface for address fe80::446:88ff:fe95:9923%2 Feb 13 15:20:03.899870 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Feb 13 15:20:03.921307 tar[1925]: linux-arm64/helm Feb 13 15:20:03.921799 jq[1923]: true Feb 13 15:20:03.929392 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:03.935385 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:03.935385 ntpd[1914]: 13 Feb 15:20:03 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:03.929462 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:03.947973 dbus-daemon[1910]: [system] SELinux support is enabled Feb 13 15:20:03.948344 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:20:03.955027 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:20:03.955093 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:20:03.957724 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:20:03.957768 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
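The bind(21) failure on fe80::446:88ff:fe95:9923 is a startup race, not a fault: the link-local address is still tentative while IPv6 duplicate-address detection runs, so binding it fails with "Cannot assign requested address"; ntpd watches the routing socket and retries once the address becomes usable. A sketch reproducing the failing call, with the address taken from the log and "eth0" assumed as the zone.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Binding a link-local address requires the zone (interface); while
        // the address is tentative, the kernel refuses with "cannot assign
        // requested address", exactly as ntpd logs above.
        addr := &net.UDPAddr{
            IP:   net.ParseIP("fe80::446:88ff:fe95:9923"),
            Port: 123, // privileged port: also needs root or CAP_NET_BIND_SERVICE
            Zone: "eth0",
        }
        conn, err := net.ListenUDP("udp6", addr)
        if err != nil {
            fmt.Println("bind failed, retry after DAD:", err)
            return
        }
        defer conn.Close()
        fmt.Println("listening on", conn.LocalAddr())
    }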
Feb 13 15:20:03.974637 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1836 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:20:03.995266 extend-filesystems[1912]: Found loop4 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found loop5 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found loop6 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found loop7 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found nvme0n1 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found nvme0n1p1 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found nvme0n1p2 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found nvme0n1p3 Feb 13 15:20:03.995266 extend-filesystems[1912]: Found usr Feb 13 15:20:03.995266 extend-filesystems[1912]: Found nvme0n1p4 Feb 13 15:20:03.987476 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:20:04.048121 extend-filesystems[1912]: Found nvme0n1p6 Feb 13 15:20:04.048121 extend-filesystems[1912]: Found nvme0n1p7 Feb 13 15:20:04.048121 extend-filesystems[1912]: Found nvme0n1p9 Feb 13 15:20:04.048121 extend-filesystems[1912]: Checking size of /dev/nvme0n1p9 Feb 13 15:20:03.991527 (ntainerd)[1940]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:20:04.006145 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:20:04.006652 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:20:04.086287 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:20:04.098986 update_engine[1922]: I20250213 15:20:04.098706 1922 main.cc:92] Flatcar Update Engine starting Feb 13 15:20:04.103541 jq[1946]: true Feb 13 15:20:04.113801 update_engine[1922]: I20250213 15:20:04.112263 1922 update_check_scheduler.cc:74] Next update check in 3m20s Feb 13 15:20:04.136008 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:20:04.141622 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:20:04.149484 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:20:04.167190 extend-filesystems[1912]: Resized partition /dev/nvme0n1p9 Feb 13 15:20:04.176928 extend-filesystems[1966]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:20:04.211066 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:20:04.263810 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:20:04.264071 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:20:04.271600 systemd-logind[1921]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:20:04.271697 systemd-logind[1921]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:20:04.272070 systemd-logind[1921]: New seat seat0. Feb 13 15:20:04.273935 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1952 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:20:04.276311 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 15:20:04.306066 coreos-metadata[1909]: Feb 13 15:20:04.305 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:20:04.360383 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:20:04.328819 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.312 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.316 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.317 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.317 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.321 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.321 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.324 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.324 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.330 INFO Fetch failed with 404: resource not found Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.330 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.333 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.333 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.339 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.340 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.341 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.341 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.347 INFO Fetch successful Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.347 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:20:04.360602 coreos-metadata[1909]: Feb 13 15:20:04.351 INFO Fetch successful Feb 13 15:20:04.364597 extend-filesystems[1966]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:20:04.364597 extend-filesystems[1966]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:20:04.364597 extend-filesystems[1966]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:20:04.393374 extend-filesystems[1912]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:20:04.397967 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:20:04.402308 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:20:04.433727 bash[1988]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:20:04.437283 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
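The extend-filesystems/resize2fs exchange above is a routine online grow of the root filesystem: the partition already carries the extra space, and resize2fs extends the mounted ext4 from 553472 to 1489915 4k blocks. A sketch driving the same tool; the device path is taken from the log, and running it requires root.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // resize2fs grows a *mounted* ext4 filesystem in place ("on-line
        // resizing"); with no size argument it fills the whole partition.
        out, err := exec.Command("resize2fs", "/dev/nvme0n1p9").CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("resize failed:", err)
        }
    }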
Feb 13 15:20:04.456427 polkitd[1981]: Started polkitd version 121 Feb 13 15:20:04.475054 systemd[1]: Starting sshkeys.service... Feb 13 15:20:04.502791 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:20:04.510474 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:20:04.526009 polkitd[1981]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:20:04.526176 polkitd[1981]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:20:04.542232 polkitd[1981]: Finished loading, compiling and executing 2 rules Feb 13 15:20:04.543129 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:20:04.544867 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:20:04.547950 polkitd[1981]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:20:04.548592 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:20:04.553173 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:20:04.606287 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1674) Feb 13 15:20:04.618086 systemd-hostnamed[1952]: Hostname set to <ip-172-31-28-163> (transient) Feb 13 15:20:04.618087 systemd-resolved[1837]: System hostname changed to 'ip-172-31-28-163'. Feb 13 15:20:04.755804 locksmithd[1963]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:20:04.831051 ntpd[1914]: bind(24) AF_INET6 fe80::446:88ff:fe95:9923%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:04.833564 ntpd[1914]: 13 Feb 15:20:04 ntpd[1914]: bind(24) AF_INET6 fe80::446:88ff:fe95:9923%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:04.833564 ntpd[1914]: 13 Feb 15:20:04 ntpd[1914]: unable to create socket on eth0 (6) for fe80::446:88ff:fe95:9923%2#123 Feb 13 15:20:04.833564 ntpd[1914]: 13 Feb 15:20:04 ntpd[1914]: failed to init interface for address fe80::446:88ff:fe95:9923%2 Feb 13 15:20:04.831118 ntpd[1914]: unable to create socket on eth0 (6) for fe80::446:88ff:fe95:9923%2#123 Feb 13 15:20:04.831146 ntpd[1914]: failed to init interface for address fe80::446:88ff:fe95:9923%2 Feb 13 15:20:04.835384 coreos-metadata[2007]: Feb 13 15:20:04.834 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:20:04.835384 coreos-metadata[2007]: Feb 13 15:20:04.834 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:20:04.837472 coreos-metadata[2007]: Feb 13 15:20:04.835 INFO Fetch successful Feb 13 15:20:04.837472 coreos-metadata[2007]: Feb 13 15:20:04.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:20:04.837472 coreos-metadata[2007]: Feb 13 15:20:04.836 INFO Fetch successful Feb 13 15:20:04.847353 unknown[2007]: wrote ssh authorized keys file for user: core Feb 13 15:20:04.872388 containerd[1940]: time="2025-02-13T15:20:04.871013329Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:20:04.896199 update-ssh-keys[2087]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:20:04.898916 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:20:04.910414 systemd[1]: Finished sshkeys.service.
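
The ntpd failures above ("Cannot assign requested address" for fe80::446:88ff:fe95:9923%2) are the usual race with IPv6 duplicate-address detection: the link-local address on eth0 (scope id 2) is not yet usable, so the bind is retried and only succeeds at 15:20:07 further down. Link-local addresses can only be bound with an explicit scope id; a sketch of the same operation:

import socket

ADDR = "fe80::446:88ff:fe95:9923"      # link-local address from the log
PORT = 123                             # NTP; binding it requires root
scope = socket.if_nametoindex("eth0")  # numeric scope id (the "%2" above)

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    # An AF_INET6 bind takes (host, port, flowinfo, scopeid); without the
    # scope id, or before duplicate-address detection finishes, this fails
    # with EADDRNOTAVAIL -- the error ntpd logs above.
    s.bind((ADDR, PORT, 0, scope))
except OSError as e:
    print("bind failed:", e)
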
Feb 13 15:20:05.089670 containerd[1940]: time="2025-02-13T15:20:05.089573566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.092352346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.092416150Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.092452990Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.092755630Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.092790754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.092909002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.092937802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.093234154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.093264790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.093295354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093397 containerd[1940]: time="2025-02-13T15:20:05.093319402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093923 containerd[1940]: time="2025-02-13T15:20:05.093482926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:05.093923 containerd[1940]: time="2025-02-13T15:20:05.093901582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:05.094976 containerd[1940]: time="2025-02-13T15:20:05.094921666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:05.094976 containerd[1940]: time="2025-02-13T15:20:05.094971658Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 15:20:05.095265 containerd[1940]: time="2025-02-13T15:20:05.095204626Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:20:05.095368 containerd[1940]: time="2025-02-13T15:20:05.095325514Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:20:05.105437 containerd[1940]: time="2025-02-13T15:20:05.105376342Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:20:05.105586 containerd[1940]: time="2025-02-13T15:20:05.105474178Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:20:05.105586 containerd[1940]: time="2025-02-13T15:20:05.105510106Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:20:05.105586 containerd[1940]: time="2025-02-13T15:20:05.105549214Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:20:05.105752 containerd[1940]: time="2025-02-13T15:20:05.105585394Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:20:05.105897 containerd[1940]: time="2025-02-13T15:20:05.105851578Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:20:05.106335 containerd[1940]: time="2025-02-13T15:20:05.106295074Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:20:05.106545 containerd[1940]: time="2025-02-13T15:20:05.106506082Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:20:05.106600 containerd[1940]: time="2025-02-13T15:20:05.106549582Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:20:05.106600 containerd[1940]: time="2025-02-13T15:20:05.106584250Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:20:05.106699 containerd[1940]: time="2025-02-13T15:20:05.106615858Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:20:05.106699 containerd[1940]: time="2025-02-13T15:20:05.106645426Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:20:05.106699 containerd[1940]: time="2025-02-13T15:20:05.106675042Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:20:05.106837 containerd[1940]: time="2025-02-13T15:20:05.106708330Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:20:05.106837 containerd[1940]: time="2025-02-13T15:20:05.106752778Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:20:05.106837 containerd[1940]: time="2025-02-13T15:20:05.106783774Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:20:05.106837 containerd[1940]: time="2025-02-13T15:20:05.106814974Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 15:20:05.107006 containerd[1940]: time="2025-02-13T15:20:05.106842382Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:20:05.107006 containerd[1940]: time="2025-02-13T15:20:05.106881982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107006 containerd[1940]: time="2025-02-13T15:20:05.106913278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107006 containerd[1940]: time="2025-02-13T15:20:05.106945042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107006 containerd[1940]: time="2025-02-13T15:20:05.106978222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107245 containerd[1940]: time="2025-02-13T15:20:05.107006326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107245 containerd[1940]: time="2025-02-13T15:20:05.107036590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107245 containerd[1940]: time="2025-02-13T15:20:05.107064442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107245 containerd[1940]: time="2025-02-13T15:20:05.107093878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.107245 containerd[1940]: time="2025-02-13T15:20:05.107123134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109286770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109345486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109429906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109460794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109493830Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109542922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109575490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109601806Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109748182Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109788154Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109813234Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109841110Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:20:05.110034 containerd[1940]: time="2025-02-13T15:20:05.109866394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110749 containerd[1940]: time="2025-02-13T15:20:05.109895266Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:20:05.110749 containerd[1940]: time="2025-02-13T15:20:05.109918462Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:20:05.110749 containerd[1940]: time="2025-02-13T15:20:05.109942714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:20:05.110884 containerd[1940]: time="2025-02-13T15:20:05.110441386Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:20:05.110884 containerd[1940]: time="2025-02-13T15:20:05.110530522Z" level=info msg="Connect containerd service" Feb 13 15:20:05.110884 containerd[1940]: time="2025-02-13T15:20:05.110589766Z" level=info msg="using legacy CRI server" Feb 13 15:20:05.110884 containerd[1940]: time="2025-02-13T15:20:05.110607802Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:20:05.110884 containerd[1940]: time="2025-02-13T15:20:05.110841430Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:20:05.127712 containerd[1940]: time="2025-02-13T15:20:05.126764338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:20:05.127712 containerd[1940]: time="2025-02-13T15:20:05.127058254Z" level=info msg="Start subscribing containerd event" Feb 13 15:20:05.127712 containerd[1940]: time="2025-02-13T15:20:05.127136170Z" level=info msg="Start recovering state" Feb 13 15:20:05.127712 containerd[1940]: time="2025-02-13T15:20:05.127280566Z" level=info msg="Start event monitor" Feb 13 15:20:05.127712 containerd[1940]: time="2025-02-13T15:20:05.127307218Z" level=info msg="Start snapshots syncer" Feb 13 15:20:05.127712 containerd[1940]: time="2025-02-13T15:20:05.127328542Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:20:05.127712 containerd[1940]: time="2025-02-13T15:20:05.127346530Z" level=info msg="Start streaming server" Feb 13 15:20:05.129433 containerd[1940]: time="2025-02-13T15:20:05.129117802Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:20:05.130016 containerd[1940]: time="2025-02-13T15:20:05.129984250Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:20:05.131394 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:20:05.134770 containerd[1940]: time="2025-02-13T15:20:05.131449570Z" level=info msg="containerd successfully booted in 0.270741s" Feb 13 15:20:05.192182 sshd_keygen[1935]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:20:05.238275 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:20:05.254019 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:20:05.269862 systemd[1]: Started sshd@0-172.31.28.163:22-147.75.109.163:36186.service - OpenSSH per-connection server daemon (147.75.109.163:36186). Feb 13 15:20:05.292241 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:20:05.294340 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:20:05.306738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:20:05.359118 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:20:05.372803 systemd[1]: Started getty@tty1.service - Getty on tty1. 
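
The "failed to load cni during init ... no network config found in /etc/cni/net.d" error above is expected at this stage: containerd's CRI plugin starts before any CNI network has been installed, and its conf syncer picks up a config as soon as one appears. Purely for illustration, a hypothetical minimal bridge network in the format the plugin looks for (the name, the 10.88.0.0/16 subnet, and the file name are placeholders, not values from this host):

import json, pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",          # veth pairs attached to a host bridge
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",  # node-local address allocation
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}
path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.write_text(json.dumps(conflist, indent=2))

On a kubeadm-bootstrapped node this file would normally be installed by a CNI add-on rather than written by hand.
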
Feb 13 15:20:05.384769 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:20:05.387515 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:20:05.442328 systemd-networkd[1836]: eth0: Gained IPv6LL Feb 13 15:20:05.446490 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:20:05.452797 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:20:05.462793 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:20:05.479345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:05.491997 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:20:05.567251 sshd[2126]: Accepted publickey for core from 147.75.109.163 port 36186 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:05.573937 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:05.585248 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:20:05.609867 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:20:05.618962 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:20:05.633597 systemd-logind[1921]: New session 1 of user core. Feb 13 15:20:05.641709 amazon-ssm-agent[2136]: Initializing new seelog logger Feb 13 15:20:05.642239 amazon-ssm-agent[2136]: New Seelog Logger Creation Complete Feb 13 15:20:05.642239 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.642239 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 processing appconfig overrides Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 processing appconfig overrides Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 processing appconfig overrides Feb 13 15:20:05.647318 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO Proxy environment variables: Feb 13 15:20:05.651249 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.651249 amazon-ssm-agent[2136]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:05.651249 amazon-ssm-agent[2136]: 2025/02/13 15:20:05 processing appconfig overrides Feb 13 15:20:05.678226 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:20:05.694022 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:20:05.712406 (systemd)[2154]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:20:05.749791 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO https_proxy: Feb 13 15:20:05.845783 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO http_proxy: Feb 13 15:20:05.846204 tar[1925]: linux-arm64/LICENSE Feb 13 15:20:05.850132 tar[1925]: linux-arm64/README.md Feb 13 15:20:05.902793 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:20:05.944106 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO no_proxy: Feb 13 15:20:06.045057 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:20:06.066754 systemd[2154]: Queued start job for default target default.target. Feb 13 15:20:06.074547 systemd[2154]: Created slice app.slice - User Application Slice. Feb 13 15:20:06.074727 systemd[2154]: Reached target paths.target - Paths. Feb 13 15:20:06.074759 systemd[2154]: Reached target timers.target - Timers. Feb 13 15:20:06.078779 systemd[2154]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:20:06.125594 systemd[2154]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:20:06.125867 systemd[2154]: Reached target sockets.target - Sockets. Feb 13 15:20:06.125901 systemd[2154]: Reached target basic.target - Basic System. Feb 13 15:20:06.126476 systemd[2154]: Reached target default.target - Main User Target. Feb 13 15:20:06.126694 systemd[2154]: Startup finished in 388ms. Feb 13 15:20:06.127082 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:20:06.139751 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:20:06.142510 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:20:06.242351 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO Agent will take identity from EC2 Feb 13 15:20:06.317904 systemd[1]: Started sshd@1-172.31.28.163:22-147.75.109.163:36188.service - OpenSSH per-connection server daemon (147.75.109.163:36188). Feb 13 15:20:06.345265 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:06.442378 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:06.542261 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:06.543231 sshd[2171]: Accepted publickey for core from 147.75.109.163 port 36188 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:06.546893 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:06.560531 systemd-logind[1921]: New session 2 of user core. Feb 13 15:20:06.569434 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:20:06.640968 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:20:06.718201 sshd[2173]: Connection closed by 147.75.109.163 port 36188 Feb 13 15:20:06.718029 sshd-session[2171]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:06.728715 systemd[1]: sshd@1-172.31.28.163:22-147.75.109.163:36188.service: Deactivated successfully. Feb 13 15:20:06.733866 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:20:06.738423 systemd-logind[1921]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 15:20:06.741952 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:20:06.769905 systemd[1]: Started sshd@2-172.31.28.163:22-147.75.109.163:36196.service - OpenSSH per-connection server daemon (147.75.109.163:36196). Feb 13 15:20:06.789279 systemd-logind[1921]: Removed session 2. Feb 13 15:20:06.842660 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:20:06.943300 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:20:07.029192 sshd[2178]: Accepted publickey for core from 147.75.109.163 port 36196 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:07.033752 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:07.043704 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [Registrar] Starting registrar module Feb 13 15:20:07.052650 systemd-logind[1921]: New session 3 of user core. Feb 13 15:20:07.059525 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:20:07.145477 amazon-ssm-agent[2136]: 2025-02-13 15:20:05 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:20:07.211260 sshd[2180]: Connection closed by 147.75.109.163 port 36196 Feb 13 15:20:07.212526 sshd-session[2178]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:07.223372 systemd[1]: sshd@2-172.31.28.163:22-147.75.109.163:36196.service: Deactivated successfully. Feb 13 15:20:07.227768 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:20:07.235642 systemd-logind[1921]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:20:07.240754 systemd-logind[1921]: Removed session 3. Feb 13 15:20:07.379584 amazon-ssm-agent[2136]: 2025-02-13 15:20:07 INFO [EC2Identity] EC2 registration was successful. Feb 13 15:20:07.393563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:07.399124 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:20:07.405388 systemd[1]: Startup finished in 1.182s (kernel) + 9.343s (initrd) + 9.324s (userspace) = 19.851s. 
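
A small consistency check on the "Startup finished" line above: the three phases are each rounded to the millisecond before printing, so their sum can legitimately differ from the separately rounded total by a few milliseconds.

phases = {"kernel": 1.182, "initrd": 9.343, "userspace": 9.324}
print(f"sum of phases: {sum(phases.values()):.3f}s vs printed total 19.851s")
# prints 19.849s; each figure is rounded from microsecond-precision monotonic
# timestamps, so the ~2 ms gap is rounding, not a logging bug.
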
Feb 13 15:20:07.407774 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:07.415963 amazon-ssm-agent[2136]: 2025-02-13 15:20:07 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:20:07.415963 amazon-ssm-agent[2136]: 2025-02-13 15:20:07 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:20:07.415963 amazon-ssm-agent[2136]: 2025-02-13 15:20:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:20:07.441566 agetty[2133]: failed to open credentials directory Feb 13 15:20:07.441989 agetty[2134]: failed to open credentials directory Feb 13 15:20:07.480601 amazon-ssm-agent[2136]: 2025-02-13 15:20:07 INFO [CredentialRefresher] Next credential rotation will be in 30.858319786133332 minutes Feb 13 15:20:07.831250 ntpd[1914]: Listen normally on 7 eth0 [fe80::446:88ff:fe95:9923%2]:123 Feb 13 15:20:07.831884 ntpd[1914]: 13 Feb 15:20:07 ntpd[1914]: Listen normally on 7 eth0 [fe80::446:88ff:fe95:9923%2]:123 Feb 13 15:20:08.446449 amazon-ssm-agent[2136]: 2025-02-13 15:20:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:20:08.527213 kubelet[2189]: E0213 15:20:08.526113 2189 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:08.533513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:08.533881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:08.536336 systemd[1]: kubelet.service: Consumed 1.402s CPU time. Feb 13 15:20:08.547786 amazon-ssm-agent[2136]: 2025-02-13 15:20:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2201) started Feb 13 15:20:08.647946 amazon-ssm-agent[2136]: 2025-02-13 15:20:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:20:11.152504 systemd-resolved[1837]: Clock change detected. Flushing caches. Feb 13 15:20:17.565704 systemd[1]: Started sshd@3-172.31.28.163:22-147.75.109.163:34216.service - OpenSSH per-connection server daemon (147.75.109.163:34216). Feb 13 15:20:17.753179 sshd[2213]: Accepted publickey for core from 147.75.109.163 port 34216 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:17.755860 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:17.765928 systemd-logind[1921]: New session 4 of user core. Feb 13 15:20:17.771873 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:20:17.899117 sshd[2215]: Connection closed by 147.75.109.163 port 34216 Feb 13 15:20:17.899896 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:17.908898 systemd[1]: sshd@3-172.31.28.163:22-147.75.109.163:34216.service: Deactivated successfully. Feb 13 15:20:17.914196 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:20:17.916121 systemd-logind[1921]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:20:17.919073 systemd-logind[1921]: Removed session 4. 
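
The kubelet crash loop above is benign at this point: kubelet.service is enabled before the node is bootstrapped, and /var/lib/kubelet/config.yaml only appears once kubeadm init or kubeadm join runs, so the unit keeps restarting (the counter reaches 3 below) until then. Purely as an illustration of the file kubelet is looking for, a hypothetical minimal KubeletConfiguration consistent with settings visible elsewhere in this log (systemd cgroup driver, static pods under /etc/kubernetes/manifests); kubeadm writes a much fuller version:

import pathlib

# Hypothetical minimal config -- kubeadm normally generates this file.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""
path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)
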
Feb 13 15:20:17.943287 systemd[1]: Started sshd@4-172.31.28.163:22-147.75.109.163:34232.service - OpenSSH per-connection server daemon (147.75.109.163:34232). Feb 13 15:20:18.137265 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 34232 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:18.139919 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:18.147894 systemd-logind[1921]: New session 5 of user core. Feb 13 15:20:18.158082 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:20:18.276938 sshd[2222]: Connection closed by 147.75.109.163 port 34232 Feb 13 15:20:18.277886 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:18.284652 systemd[1]: sshd@4-172.31.28.163:22-147.75.109.163:34232.service: Deactivated successfully. Feb 13 15:20:18.289542 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:20:18.292025 systemd-logind[1921]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:20:18.294657 systemd-logind[1921]: Removed session 5. Feb 13 15:20:18.322334 systemd[1]: Started sshd@5-172.31.28.163:22-147.75.109.163:34246.service - OpenSSH per-connection server daemon (147.75.109.163:34246). Feb 13 15:20:18.509664 sshd[2227]: Accepted publickey for core from 147.75.109.163 port 34246 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:18.512777 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:18.523968 systemd-logind[1921]: New session 6 of user core. Feb 13 15:20:18.527918 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:20:18.662387 sshd[2229]: Connection closed by 147.75.109.163 port 34246 Feb 13 15:20:18.661209 sshd-session[2227]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:18.667418 systemd[1]: sshd@5-172.31.28.163:22-147.75.109.163:34246.service: Deactivated successfully. Feb 13 15:20:18.672081 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:20:18.675416 systemd-logind[1921]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:20:18.678177 systemd-logind[1921]: Removed session 6. Feb 13 15:20:18.697859 systemd[1]: Started sshd@6-172.31.28.163:22-147.75.109.163:34256.service - OpenSSH per-connection server daemon (147.75.109.163:34256). Feb 13 15:20:18.859638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:20:18.870905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:18.896599 sshd[2234]: Accepted publickey for core from 147.75.109.163 port 34256 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:18.897007 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:18.905963 systemd-logind[1921]: New session 7 of user core. Feb 13 15:20:18.914923 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 15:20:19.048815 sudo[2240]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:20:19.049781 sudo[2240]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:19.074012 sudo[2240]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:19.097748 sshd[2239]: Connection closed by 147.75.109.163 port 34256 Feb 13 15:20:19.098992 sshd-session[2234]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:19.107934 systemd[1]: sshd@6-172.31.28.163:22-147.75.109.163:34256.service: Deactivated successfully. Feb 13 15:20:19.113588 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:20:19.116892 systemd-logind[1921]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:20:19.118965 systemd-logind[1921]: Removed session 7. Feb 13 15:20:19.145819 systemd[1]: Started sshd@7-172.31.28.163:22-147.75.109.163:34268.service - OpenSSH per-connection server daemon (147.75.109.163:34268). Feb 13 15:20:19.253691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:19.262254 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:19.347899 kubelet[2252]: E0213 15:20:19.347811 2252 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:19.357372 sshd[2245]: Accepted publickey for core from 147.75.109.163 port 34268 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:19.356954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:19.357252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:19.359624 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:19.368635 systemd-logind[1921]: New session 8 of user core. Feb 13 15:20:19.378827 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:20:19.483864 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:20:19.484528 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:19.490594 sudo[2262]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:19.500526 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:20:19.501270 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:19.524215 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:20:19.599605 augenrules[2284]: No rules Feb 13 15:20:19.602944 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:20:19.603450 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:20:19.605902 sudo[2261]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:19.632594 sshd[2260]: Connection closed by 147.75.109.163 port 34268 Feb 13 15:20:19.633372 sshd-session[2245]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:19.641354 systemd[1]: sshd@7-172.31.28.163:22-147.75.109.163:34268.service: Deactivated successfully. 
Feb 13 15:20:19.644622 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:20:19.646167 systemd-logind[1921]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:20:19.648458 systemd-logind[1921]: Removed session 8. Feb 13 15:20:19.675079 systemd[1]: Started sshd@8-172.31.28.163:22-147.75.109.163:36060.service - OpenSSH per-connection server daemon (147.75.109.163:36060). Feb 13 15:20:19.858230 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 36060 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:19.860925 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:19.869612 systemd-logind[1921]: New session 9 of user core. Feb 13 15:20:19.881886 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:20:19.987906 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:20:19.988694 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:20:20.574079 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:20:20.577486 (dockerd)[2312]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:20:20.943160 dockerd[2312]: time="2025-02-13T15:20:20.943039196Z" level=info msg="Starting up" Feb 13 15:20:21.081541 systemd[1]: var-lib-docker-metacopy\x2dcheck2383554200-merged.mount: Deactivated successfully. Feb 13 15:20:21.096326 dockerd[2312]: time="2025-02-13T15:20:21.095826785Z" level=info msg="Loading containers: start." Feb 13 15:20:21.370705 kernel: Initializing XFRM netlink socket Feb 13 15:20:21.402719 (udev-worker)[2336]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:20:21.495539 systemd-networkd[1836]: docker0: Link UP Feb 13 15:20:21.534905 dockerd[2312]: time="2025-02-13T15:20:21.534838063Z" level=info msg="Loading containers: done." Feb 13 15:20:21.556223 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck393236757-merged.mount: Deactivated successfully. Feb 13 15:20:21.561100 dockerd[2312]: time="2025-02-13T15:20:21.561039403Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:20:21.561273 dockerd[2312]: time="2025-02-13T15:20:21.561186199Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:20:21.561456 dockerd[2312]: time="2025-02-13T15:20:21.561411163Z" level=info msg="Daemon has completed initialization" Feb 13 15:20:21.614789 dockerd[2312]: time="2025-02-13T15:20:21.614694199Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:20:21.615181 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:20:22.842374 containerd[1940]: time="2025-02-13T15:20:22.842313034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:20:23.453343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611383576.mount: Deactivated successfully. 
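
dockerd's "Not using native diff for overlay2" warning above is driven by a kernel build option: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, overlayfs may record directory renames as redirects that the native differ cannot follow, so Docker falls back to a slower but safe diff path. If the kernel exposes its build config, this can be confirmed with:

import gzip

# /proc/config.gz exists only on kernels built with CONFIG_IKCONFIG_PROC.
with gzip.open("/proc/config.gz", "rt") as f:
    for line in f:
        if "CONFIG_OVERLAY_FS_REDIRECT_DIR" in line:
            print(line.strip())  # the warning above implies =y on this host
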
Feb 13 15:20:25.117342 containerd[1940]: time="2025-02-13T15:20:25.117255045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:25.119409 containerd[1940]: time="2025-02-13T15:20:25.119334141Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 15:20:25.120420 containerd[1940]: time="2025-02-13T15:20:25.120335829Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:25.126066 containerd[1940]: time="2025-02-13T15:20:25.125978613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:25.128645 containerd[1940]: time="2025-02-13T15:20:25.128338677Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.285959931s" Feb 13 15:20:25.128645 containerd[1940]: time="2025-02-13T15:20:25.128403021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 15:20:25.167339 containerd[1940]: time="2025-02-13T15:20:25.167251557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:20:26.948928 containerd[1940]: time="2025-02-13T15:20:26.948852290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:26.951202 containerd[1940]: time="2025-02-13T15:20:26.951089054Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 15:20:26.954190 containerd[1940]: time="2025-02-13T15:20:26.953498822Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:26.965183 containerd[1940]: time="2025-02-13T15:20:26.965084306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:26.970444 containerd[1940]: time="2025-02-13T15:20:26.970354670Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.803006969s" Feb 13 15:20:26.970836 containerd[1940]: time="2025-02-13T15:20:26.970772498Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 
15:20:27.024117 containerd[1940]: time="2025-02-13T15:20:27.024068602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:20:28.189982 containerd[1940]: time="2025-02-13T15:20:28.189682776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:28.192031 containerd[1940]: time="2025-02-13T15:20:28.191919300Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 15:20:28.192997 containerd[1940]: time="2025-02-13T15:20:28.192418992Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:28.198795 containerd[1940]: time="2025-02-13T15:20:28.198696336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:28.201365 containerd[1940]: time="2025-02-13T15:20:28.201291288Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.176979218s" Feb 13 15:20:28.201753 containerd[1940]: time="2025-02-13T15:20:28.201595416Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 15:20:28.250101 containerd[1940]: time="2025-02-13T15:20:28.250032732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:20:29.381376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:20:29.391852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:29.723156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:29.733135 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:29.838583 kubelet[2596]: E0213 15:20:29.838328 2596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:29.846912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4192931773.mount: Deactivated successfully. Feb 13 15:20:29.848769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:29.849506 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:20:30.350460 containerd[1940]: time="2025-02-13T15:20:30.350382495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:30.352425 containerd[1940]: time="2025-02-13T15:20:30.352278087Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 15:20:30.353241 containerd[1940]: time="2025-02-13T15:20:30.352681515Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:30.357094 containerd[1940]: time="2025-02-13T15:20:30.357007683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:30.358963 containerd[1940]: time="2025-02-13T15:20:30.358889247Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 2.108777699s" Feb 13 15:20:30.358963 containerd[1940]: time="2025-02-13T15:20:30.358954467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:20:30.405462 containerd[1940]: time="2025-02-13T15:20:30.405392451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:20:30.949796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687558969.mount: Deactivated successfully. 
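
Each pull in this stretch logs the compressed "bytes read" together with the wall-clock pull time (which includes unpacking), so a rough effective throughput per image can be read off, roughly 12-14 MiB/s here:

pulls = {  # (bytes read, seconds), taken verbatim from the pull messages
    "kube-apiserver":          (29865207, 2.285959931),
    "kube-controller-manager": (26898594, 1.803006969),
    "kube-scheduler":          (16164934, 1.176979218),
    "kube-proxy":              (25663370, 2.108777699),
}
for name, (nbytes, secs) in pulls.items():
    print(f"{name}: {nbytes / secs / 2**20:.1f} MiB/s")
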
Feb 13 15:20:32.192253 containerd[1940]: time="2025-02-13T15:20:32.191888584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.195004 containerd[1940]: time="2025-02-13T15:20:32.194880520Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:20:32.202518 containerd[1940]: time="2025-02-13T15:20:32.199144036Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.206415 containerd[1940]: time="2025-02-13T15:20:32.206293576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.216152 containerd[1940]: time="2025-02-13T15:20:32.216030244Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.810571385s" Feb 13 15:20:32.216152 containerd[1940]: time="2025-02-13T15:20:32.216143140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:20:32.264082 containerd[1940]: time="2025-02-13T15:20:32.263739172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:20:32.734896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273383062.mount: Deactivated successfully. 
Feb 13 15:20:32.740947 containerd[1940]: time="2025-02-13T15:20:32.740868907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.742678 containerd[1940]: time="2025-02-13T15:20:32.742525147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 15:20:32.743368 containerd[1940]: time="2025-02-13T15:20:32.743296507Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.748114 containerd[1940]: time="2025-02-13T15:20:32.748000771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:32.750388 containerd[1940]: time="2025-02-13T15:20:32.750094267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 486.290031ms" Feb 13 15:20:32.750388 containerd[1940]: time="2025-02-13T15:20:32.750173059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:20:32.797514 containerd[1940]: time="2025-02-13T15:20:32.797119051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:20:33.312860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928618789.mount: Deactivated successfully. Feb 13 15:20:34.978820 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 15:20:35.837647 containerd[1940]: time="2025-02-13T15:20:35.837301846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:35.866845 containerd[1940]: time="2025-02-13T15:20:35.866755522Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 15:20:35.912080 containerd[1940]: time="2025-02-13T15:20:35.911986606Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:35.929218 containerd[1940]: time="2025-02-13T15:20:35.929123795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:20:35.931315 containerd[1940]: time="2025-02-13T15:20:35.931028303Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.13384768s" Feb 13 15:20:35.931315 containerd[1940]: time="2025-02-13T15:20:35.931102211Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:20:39.881588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:20:39.893756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:40.272914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:40.278700 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:40.407190 kubelet[2784]: E0213 15:20:40.407013 2784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:20:40.413048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:20:40.413487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:20:44.768078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:44.777034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:44.827196 systemd[1]: Reloading requested from client PID 2798 ('systemctl') (unit session-9.scope)... Feb 13 15:20:44.827232 systemd[1]: Reloading... Feb 13 15:20:45.089269 zram_generator::config[2841]: No configuration found. Feb 13 15:20:45.326863 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:20:45.503217 systemd[1]: Reloading finished in 675 ms. Feb 13 15:20:45.613464 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:20:45.613715 systemd[1]: kubelet.service: Failed with result 'signal'. 
Feb 13 15:20:45.614410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:45.624269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:45.951889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:45.971289 (kubelet)[2901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:20:46.065616 kubelet[2901]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:20:46.065616 kubelet[2901]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:20:46.065616 kubelet[2901]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:20:46.065616 kubelet[2901]: I0213 15:20:46.065054 2901 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:20:47.170517 kubelet[2901]: I0213 15:20:47.170454 2901 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:20:47.170517 kubelet[2901]: I0213 15:20:47.170502 2901 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:20:47.171154 kubelet[2901]: I0213 15:20:47.170901 2901 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:20:47.204404 kubelet[2901]: E0213 15:20:47.204364 2901 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.206124 kubelet[2901]: I0213 15:20:47.205928 2901 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:20:47.219727 kubelet[2901]: I0213 15:20:47.219677 2901 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:20:47.222327 kubelet[2901]: I0213 15:20:47.222242 2901 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:20:47.222668 kubelet[2901]: I0213 15:20:47.222320 2901 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:20:47.222870 kubelet[2901]: I0213 15:20:47.222688 2901 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:20:47.222870 kubelet[2901]: I0213 15:20:47.222712 2901 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:20:47.222992 kubelet[2901]: I0213 15:20:47.222980 2901 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:47.224434 kubelet[2901]: I0213 15:20:47.224379 2901 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:20:47.224434 kubelet[2901]: I0213 15:20:47.224427 2901 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:20:47.224615 kubelet[2901]: I0213 15:20:47.224504 2901 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:20:47.224615 kubelet[2901]: I0213 15:20:47.224604 2901 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:20:47.226675 kubelet[2901]: W0213 15:20:47.226348 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-163&limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.226675 kubelet[2901]: E0213 15:20:47.226430 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-163&limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.226675 kubelet[2901]: W0213 15:20:47.226533 2901 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.226675 kubelet[2901]: E0213 15:20:47.226642 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.227169 kubelet[2901]: I0213 15:20:47.227140 2901 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:20:47.227666 kubelet[2901]: I0213 15:20:47.227643 2901 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:20:47.227843 kubelet[2901]: W0213 15:20:47.227822 2901 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:20:47.229747 kubelet[2901]: I0213 15:20:47.229395 2901 server.go:1264] "Started kubelet" Feb 13 15:20:47.241351 kubelet[2901]: I0213 15:20:47.241309 2901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:20:47.244535 kubelet[2901]: I0213 15:20:47.244468 2901 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:20:47.247788 kubelet[2901]: I0213 15:20:47.247734 2901 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:20:47.251590 kubelet[2901]: I0213 15:20:47.251446 2901 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:20:47.251961 kubelet[2901]: I0213 15:20:47.251919 2901 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:20:47.252417 kubelet[2901]: E0213 15:20:47.252190 2901 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.163:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-163.1823cdb268171567 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-163,UID:ip-172-31-28-163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-163,},FirstTimestamp:2025-02-13 15:20:47.229359463 +0000 UTC m=+1.250309468,LastTimestamp:2025-02-13 15:20:47.229359463 +0000 UTC m=+1.250309468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-163,}" Feb 13 15:20:47.253872 kubelet[2901]: I0213 15:20:47.253820 2901 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:20:47.258312 kubelet[2901]: I0213 15:20:47.258264 2901 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:20:47.260575 kubelet[2901]: I0213 15:20:47.260079 2901 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:20:47.262751 kubelet[2901]: E0213 15:20:47.262668 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-163?timeout=10s\": dial tcp 172.31.28.163:6443: connect: connection refused" interval="200ms" Feb 13 15:20:47.268121 
kubelet[2901]: W0213 15:20:47.267251 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.268121 kubelet[2901]: E0213 15:20:47.267352 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.268121 kubelet[2901]: E0213 15:20:47.268101 2901 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:20:47.268704 kubelet[2901]: I0213 15:20:47.268664 2901 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:20:47.268704 kubelet[2901]: I0213 15:20:47.268697 2901 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:20:47.268861 kubelet[2901]: I0213 15:20:47.268823 2901 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:20:47.284388 kubelet[2901]: I0213 15:20:47.284314 2901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:20:47.286702 kubelet[2901]: I0213 15:20:47.286642 2901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:20:47.286830 kubelet[2901]: I0213 15:20:47.286753 2901 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:20:47.286830 kubelet[2901]: I0213 15:20:47.286794 2901 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:20:47.286995 kubelet[2901]: E0213 15:20:47.286870 2901 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:20:47.298399 kubelet[2901]: W0213 15:20:47.298245 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.298570 kubelet[2901]: E0213 15:20:47.298357 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:47.318339 kubelet[2901]: I0213 15:20:47.318299 2901 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:20:47.318339 kubelet[2901]: I0213 15:20:47.318331 2901 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:20:47.318579 kubelet[2901]: I0213 15:20:47.318367 2901 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:47.321971 kubelet[2901]: I0213 15:20:47.321877 2901 policy_none.go:49] "None policy: Start" Feb 13 15:20:47.323541 kubelet[2901]: I0213 15:20:47.323462 2901 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:20:47.323541 kubelet[2901]: I0213 15:20:47.323567 2901 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:20:47.336295 systemd[1]: 
Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:20:47.354058 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:20:47.359613 kubelet[2901]: I0213 15:20:47.359052 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-163" Feb 13 15:20:47.359768 kubelet[2901]: E0213 15:20:47.359691 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.163:6443/api/v1/nodes\": dial tcp 172.31.28.163:6443: connect: connection refused" node="ip-172-31-28-163" Feb 13 15:20:47.364379 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:20:47.376291 kubelet[2901]: I0213 15:20:47.375794 2901 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:20:47.376291 kubelet[2901]: I0213 15:20:47.376100 2901 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:20:47.376291 kubelet[2901]: I0213 15:20:47.376270 2901 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:20:47.379518 kubelet[2901]: E0213 15:20:47.379324 2901 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-163\" not found" Feb 13 15:20:47.387660 kubelet[2901]: I0213 15:20:47.387599 2901 topology_manager.go:215] "Topology Admit Handler" podUID="c2ac3049b59e014d12a8b898cf95aaac" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-163" Feb 13 15:20:47.390318 kubelet[2901]: I0213 15:20:47.390108 2901 topology_manager.go:215] "Topology Admit Handler" podUID="6630366fd57a324ea231719c822ca9ff" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:47.393390 kubelet[2901]: I0213 15:20:47.393250 2901 topology_manager.go:215] "Topology Admit Handler" podUID="e89d88a02bed9fc387d96b661ec04439" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-163" Feb 13 15:20:47.408827 systemd[1]: Created slice kubepods-burstable-podc2ac3049b59e014d12a8b898cf95aaac.slice - libcontainer container kubepods-burstable-podc2ac3049b59e014d12a8b898cf95aaac.slice. Feb 13 15:20:47.427310 systemd[1]: Created slice kubepods-burstable-pod6630366fd57a324ea231719c822ca9ff.slice - libcontainer container kubepods-burstable-pod6630366fd57a324ea231719c822ca9ff.slice. Feb 13 15:20:47.441260 systemd[1]: Created slice kubepods-burstable-pode89d88a02bed9fc387d96b661ec04439.slice - libcontainer container kubepods-burstable-pode89d88a02bed9fc387d96b661ec04439.slice. 
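
With "CgroupDriver":"systemd" (see the nodeConfig dump above), each QoS class and each pod gets its own systemd slice, and dashes in pod UIDs are escaped to underscores in the slice name. A small illustration of the naming scheme, inferred from the Created slice lines in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice derives a per-pod slice name the way the systemd cgroup
    // driver appears to here: "-" in the pod UID becomes "_", nested
    // under the QoS class slice.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "c2ac3049b59e014d12a8b898cf95aaac"))
        // kubepods-burstable-podc2ac3049b59e014d12a8b898cf95aaac.slice
        fmt.Println(podSlice("burstable", "715204ab-cf39-4ea0-b1e5-71a69f9b7212"))
        // kubepods-burstable-pod715204ab_cf39_4ea0_b1e5_71a69f9b7212.slice
    }
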
Feb 13 15:20:47.462265 kubelet[2901]: I0213 15:20:47.462183 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2ac3049b59e014d12a8b898cf95aaac-ca-certs\") pod \"kube-apiserver-ip-172-31-28-163\" (UID: \"c2ac3049b59e014d12a8b898cf95aaac\") " pod="kube-system/kube-apiserver-ip-172-31-28-163" Feb 13 15:20:47.462489 kubelet[2901]: I0213 15:20:47.462280 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:47.462489 kubelet[2901]: I0213 15:20:47.462332 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:47.462489 kubelet[2901]: I0213 15:20:47.462380 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:47.462489 kubelet[2901]: I0213 15:20:47.462435 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e89d88a02bed9fc387d96b661ec04439-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-163\" (UID: \"e89d88a02bed9fc387d96b661ec04439\") " pod="kube-system/kube-scheduler-ip-172-31-28-163" Feb 13 15:20:47.462489 kubelet[2901]: I0213 15:20:47.462478 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2ac3049b59e014d12a8b898cf95aaac-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-163\" (UID: \"c2ac3049b59e014d12a8b898cf95aaac\") " pod="kube-system/kube-apiserver-ip-172-31-28-163" Feb 13 15:20:47.462867 kubelet[2901]: I0213 15:20:47.462537 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2ac3049b59e014d12a8b898cf95aaac-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-163\" (UID: \"c2ac3049b59e014d12a8b898cf95aaac\") " pod="kube-system/kube-apiserver-ip-172-31-28-163" Feb 13 15:20:47.462867 kubelet[2901]: I0213 15:20:47.462618 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:47.462867 kubelet[2901]: I0213 15:20:47.462668 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:47.463623 kubelet[2901]: E0213 15:20:47.463509 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-163?timeout=10s\": dial tcp 172.31.28.163:6443: connect: connection refused" interval="400ms" Feb 13 15:20:47.562080 kubelet[2901]: I0213 15:20:47.562031 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-163" Feb 13 15:20:47.562697 kubelet[2901]: E0213 15:20:47.562627 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.163:6443/api/v1/nodes\": dial tcp 172.31.28.163:6443: connect: connection refused" node="ip-172-31-28-163" Feb 13 15:20:47.723650 containerd[1940]: time="2025-02-13T15:20:47.723452325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-163,Uid:c2ac3049b59e014d12a8b898cf95aaac,Namespace:kube-system,Attempt:0,}" Feb 13 15:20:47.737145 containerd[1940]: time="2025-02-13T15:20:47.736962861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-163,Uid:6630366fd57a324ea231719c822ca9ff,Namespace:kube-system,Attempt:0,}" Feb 13 15:20:47.748096 containerd[1940]: time="2025-02-13T15:20:47.747969129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-163,Uid:e89d88a02bed9fc387d96b661ec04439,Namespace:kube-system,Attempt:0,}" Feb 13 15:20:47.864523 kubelet[2901]: E0213 15:20:47.864456 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-163?timeout=10s\": dial tcp 172.31.28.163:6443: connect: connection refused" interval="800ms" Feb 13 15:20:47.964992 kubelet[2901]: I0213 15:20:47.964940 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-163" Feb 13 15:20:47.965420 kubelet[2901]: E0213 15:20:47.965376 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.163:6443/api/v1/nodes\": dial tcp 172.31.28.163:6443: connect: connection refused" node="ip-172-31-28-163" Feb 13 15:20:48.070945 kubelet[2901]: W0213 15:20:48.070686 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.070945 kubelet[2901]: E0213 15:20:48.070776 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.202822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290908071.mount: Deactivated successfully. 
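
Every "connection refused" against 172.31.28.163:6443 in this stretch is expected: the kubelet is dialing the API server it is itself about to launch as a static pod, and the lease controller's retry interval doubles on each failure (200ms, 400ms above, then 800ms and 1.6s below). A sketch of that wait-for-apiserver pattern; the address is from the log, the dial timeout is assumed, and the real controller caps its interval where this loop does not:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        for {
            // Probe the apiserver endpoint the kubelet keeps retrying.
            conn, err := net.DialTimeout("tcp", "172.31.28.163:6443", time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable")
                return
            }
            fmt.Printf("dial failed (%v), retrying in %v\n", err, interval)
            time.Sleep(interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the log
        }
    }
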
Feb 13 15:20:48.213730 containerd[1940]: time="2025-02-13T15:20:48.213616352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:48.216640 containerd[1940]: time="2025-02-13T15:20:48.216534572Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:48.219201 containerd[1940]: time="2025-02-13T15:20:48.219091700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:20:48.220602 containerd[1940]: time="2025-02-13T15:20:48.220457396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:20:48.223940 containerd[1940]: time="2025-02-13T15:20:48.223856120Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:48.226491 containerd[1940]: time="2025-02-13T15:20:48.226275308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:20:48.227024 containerd[1940]: time="2025-02-13T15:20:48.226744880Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:48.237036 containerd[1940]: time="2025-02-13T15:20:48.236906780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:20:48.241728 containerd[1940]: time="2025-02-13T15:20:48.240711020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.607735ms" Feb 13 15:20:48.243002 kubelet[2901]: W0213 15:20:48.242726 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.243002 kubelet[2901]: E0213 15:20:48.242829 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.245131 containerd[1940]: time="2025-02-13T15:20:48.245040152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.423007ms" Feb 13 15:20:48.246709 containerd[1940]: time="2025-02-13T15:20:48.246639116Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.553059ms" Feb 13 15:20:48.311922 kubelet[2901]: W0213 15:20:48.310259 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-163&limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.311922 kubelet[2901]: E0213 15:20:48.310400 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-163&limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.457187 containerd[1940]: time="2025-02-13T15:20:48.457003965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:20:48.459714 containerd[1940]: time="2025-02-13T15:20:48.459288057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:20:48.459714 containerd[1940]: time="2025-02-13T15:20:48.459339657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:48.459714 containerd[1940]: time="2025-02-13T15:20:48.459517557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:48.466816 containerd[1940]: time="2025-02-13T15:20:48.466621449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:20:48.469438 containerd[1940]: time="2025-02-13T15:20:48.469274601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:20:48.469969 containerd[1940]: time="2025-02-13T15:20:48.469589673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:20:48.469969 containerd[1940]: time="2025-02-13T15:20:48.469641777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:48.469969 containerd[1940]: time="2025-02-13T15:20:48.469827321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:48.477025 containerd[1940]: time="2025-02-13T15:20:48.476643309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:20:48.477025 containerd[1940]: time="2025-02-13T15:20:48.476704713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:48.477025 containerd[1940]: time="2025-02-13T15:20:48.476888817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:20:48.480240 kubelet[2901]: W0213 15:20:48.480167 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.480765 kubelet[2901]: E0213 15:20:48.480521 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.163:6443: connect: connection refused Feb 13 15:20:48.521939 systemd[1]: Started cri-containerd-8cabebce634e472184543c356372754d4413e0d575de02c1ee72ae4a4b63e0d3.scope - libcontainer container 8cabebce634e472184543c356372754d4413e0d575de02c1ee72ae4a4b63e0d3. Feb 13 15:20:48.536826 systemd[1]: Started cri-containerd-1c806d0a35f87c9aa6e1721ca7a50e00aeac7ab4330f7ae490ae67580ee2e10b.scope - libcontainer container 1c806d0a35f87c9aa6e1721ca7a50e00aeac7ab4330f7ae490ae67580ee2e10b. Feb 13 15:20:48.541346 systemd[1]: Started cri-containerd-c7024955f9722a3eed9ca9c80adab8a61a02ebd8e7376546db231b171192dbe9.scope - libcontainer container c7024955f9722a3eed9ca9c80adab8a61a02ebd8e7376546db231b171192dbe9. Feb 13 15:20:48.661922 containerd[1940]: time="2025-02-13T15:20:48.661852042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-163,Uid:c2ac3049b59e014d12a8b898cf95aaac,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cabebce634e472184543c356372754d4413e0d575de02c1ee72ae4a4b63e0d3\"" Feb 13 15:20:48.665955 kubelet[2901]: E0213 15:20:48.665859 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-163?timeout=10s\": dial tcp 172.31.28.163:6443: connect: connection refused" interval="1.6s" Feb 13 15:20:48.705101 containerd[1940]: time="2025-02-13T15:20:48.704890282Z" level=info msg="CreateContainer within sandbox \"8cabebce634e472184543c356372754d4413e0d575de02c1ee72ae4a4b63e0d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:20:48.726474 containerd[1940]: time="2025-02-13T15:20:48.724728910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-163,Uid:e89d88a02bed9fc387d96b661ec04439,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c806d0a35f87c9aa6e1721ca7a50e00aeac7ab4330f7ae490ae67580ee2e10b\"" Feb 13 15:20:48.738852 containerd[1940]: time="2025-02-13T15:20:48.738482458Z" level=info msg="CreateContainer within sandbox \"1c806d0a35f87c9aa6e1721ca7a50e00aeac7ab4330f7ae490ae67580ee2e10b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:20:48.742464 containerd[1940]: time="2025-02-13T15:20:48.741912994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-163,Uid:6630366fd57a324ea231719c822ca9ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7024955f9722a3eed9ca9c80adab8a61a02ebd8e7376546db231b171192dbe9\"" Feb 13 15:20:48.743616 containerd[1940]: time="2025-02-13T15:20:48.743293954Z" level=info msg="CreateContainer within sandbox \"8cabebce634e472184543c356372754d4413e0d575de02c1ee72ae4a4b63e0d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"c95c5ae0b8155b2722b939f1b6372c06ff7cdfc2272cb08358ff793ff3c52075\"" Feb 13 15:20:48.745218 containerd[1940]: time="2025-02-13T15:20:48.745166326Z" level=info msg="StartContainer for \"c95c5ae0b8155b2722b939f1b6372c06ff7cdfc2272cb08358ff793ff3c52075\"" Feb 13 15:20:48.755944 containerd[1940]: time="2025-02-13T15:20:48.755818846Z" level=info msg="CreateContainer within sandbox \"c7024955f9722a3eed9ca9c80adab8a61a02ebd8e7376546db231b171192dbe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:20:48.768590 kubelet[2901]: I0213 15:20:48.767849 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-163" Feb 13 15:20:48.768998 kubelet[2901]: E0213 15:20:48.768940 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.163:6443/api/v1/nodes\": dial tcp 172.31.28.163:6443: connect: connection refused" node="ip-172-31-28-163" Feb 13 15:20:48.782594 containerd[1940]: time="2025-02-13T15:20:48.782475874Z" level=info msg="CreateContainer within sandbox \"1c806d0a35f87c9aa6e1721ca7a50e00aeac7ab4330f7ae490ae67580ee2e10b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd081f011253221b492982f0f588cf406c54006f25ed8db918d2b59d60e4a765\"" Feb 13 15:20:48.785040 containerd[1940]: time="2025-02-13T15:20:48.783774514Z" level=info msg="StartContainer for \"fd081f011253221b492982f0f588cf406c54006f25ed8db918d2b59d60e4a765\"" Feb 13 15:20:48.786454 containerd[1940]: time="2025-02-13T15:20:48.786377554Z" level=info msg="CreateContainer within sandbox \"c7024955f9722a3eed9ca9c80adab8a61a02ebd8e7376546db231b171192dbe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ad7675e545e70bf80853d59b060dee2d82caa5e5d254ff647a40c7333b7d09e\"" Feb 13 15:20:48.787321 containerd[1940]: time="2025-02-13T15:20:48.787241230Z" level=info msg="StartContainer for \"3ad7675e545e70bf80853d59b060dee2d82caa5e5d254ff647a40c7333b7d09e\"" Feb 13 15:20:48.828920 systemd[1]: Started cri-containerd-c95c5ae0b8155b2722b939f1b6372c06ff7cdfc2272cb08358ff793ff3c52075.scope - libcontainer container c95c5ae0b8155b2722b939f1b6372c06ff7cdfc2272cb08358ff793ff3c52075. Feb 13 15:20:48.861349 systemd[1]: Started cri-containerd-3ad7675e545e70bf80853d59b060dee2d82caa5e5d254ff647a40c7333b7d09e.scope - libcontainer container 3ad7675e545e70bf80853d59b060dee2d82caa5e5d254ff647a40c7333b7d09e. Feb 13 15:20:48.884877 systemd[1]: Started cri-containerd-fd081f011253221b492982f0f588cf406c54006f25ed8db918d2b59d60e4a765.scope - libcontainer container fd081f011253221b492982f0f588cf406c54006f25ed8db918d2b59d60e4a765. Feb 13 15:20:48.988269 containerd[1940]: time="2025-02-13T15:20:48.987100511Z" level=info msg="StartContainer for \"c95c5ae0b8155b2722b939f1b6372c06ff7cdfc2272cb08358ff793ff3c52075\" returns successfully" Feb 13 15:20:48.998143 containerd[1940]: time="2025-02-13T15:20:48.998053163Z" level=info msg="StartContainer for \"3ad7675e545e70bf80853d59b060dee2d82caa5e5d254ff647a40c7333b7d09e\" returns successfully" Feb 13 15:20:49.085497 containerd[1940]: time="2025-02-13T15:20:49.085422548Z" level=info msg="StartContainer for \"fd081f011253221b492982f0f588cf406c54006f25ed8db918d2b59d60e4a765\" returns successfully" Feb 13 15:20:49.876686 update_engine[1922]: I20250213 15:20:49.876592 1922 update_attempter.cc:509] Updating boot flags... 
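
The RunPodSandbox, CreateContainer, StartContainer sequence that just completed for the three control-plane pods is the standard CRI flow. The sketch below issues the first of those calls directly against containerd's CRI socket via the published gRPC API; the request is trimmed to bare metadata, so it illustrates the call shape rather than a working pod launch:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Step 1 of the logged flow: create the pod sandbox. CreateContainer
        // and StartContainer would follow, fed from the static pod manifest
        // under /etc/kubernetes/manifests.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-apiserver-ip-172-31-28-163",
                    Namespace: "kube-system",
                    Uid:       "c2ac3049b59e014d12a8b898cf95aaac",
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("sandbox id:", resp.PodSandboxId)
    }
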
Feb 13 15:20:50.004674 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3191) Feb 13 15:20:50.372091 kubelet[2901]: I0213 15:20:50.372029 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-163" Feb 13 15:20:50.542827 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3194) Feb 13 15:20:53.234239 kubelet[2901]: I0213 15:20:53.234191 2901 apiserver.go:52] "Watching apiserver" Feb 13 15:20:53.415190 kubelet[2901]: E0213 15:20:53.415060 2901 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-163\" not found" node="ip-172-31-28-163" Feb 13 15:20:53.459111 kubelet[2901]: I0213 15:20:53.459023 2901 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:20:53.599106 kubelet[2901]: I0213 15:20:53.598784 2901 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-163" Feb 13 15:20:55.434242 systemd[1]: Reloading requested from client PID 3360 ('systemctl') (unit session-9.scope)... Feb 13 15:20:55.434272 systemd[1]: Reloading... Feb 13 15:20:55.651807 zram_generator::config[3410]: No configuration found. Feb 13 15:20:55.887704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:20:56.088947 systemd[1]: Reloading finished in 653 ms. Feb 13 15:20:56.165614 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:56.166282 kubelet[2901]: I0213 15:20:56.166127 2901 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:20:56.176066 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:20:56.176656 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:56.176725 systemd[1]: kubelet.service: Consumed 2.007s CPU time, 114.3M memory peak, 0B memory swap peak. Feb 13 15:20:56.184262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:56.511935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:20:56.524190 (kubelet)[3461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:20:56.630585 kubelet[3461]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:20:56.630585 kubelet[3461]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:20:56.630585 kubelet[3461]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
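
The deprecation warnings repeated above say --container-runtime-endpoint and --volume-plugin-dir belong in the kubelet config file. A minimal programmatic rendering of the two migratable settings, using the published KubeletConfiguration types; the endpoint is the conventional containerd default and the plugin directory is the one the earlier Flexvolume line mentions, neither read from an actual file here:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletconfig "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Config-file equivalents of the deprecated kubelet flags.
        cfg := kubeletconfig.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        }
        out, err := yaml.Marshal(&cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // YAML suitable for /var/lib/kubelet/config.yaml
    }
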
Feb 13 15:20:56.631226 kubelet[3461]: I0213 15:20:56.630680 3461 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:20:56.639943 kubelet[3461]: I0213 15:20:56.639867 3461 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:20:56.639943 kubelet[3461]: I0213 15:20:56.639929 3461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:20:56.640862 kubelet[3461]: I0213 15:20:56.640397 3461 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:20:56.643807 kubelet[3461]: I0213 15:20:56.643246 3461 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:20:56.648185 kubelet[3461]: I0213 15:20:56.645781 3461 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:20:56.664960 kubelet[3461]: I0213 15:20:56.664906 3461 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:20:56.668198 kubelet[3461]: I0213 15:20:56.667507 3461 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:20:56.668519 kubelet[3461]: I0213 15:20:56.667615 3461 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:20:56.669365 kubelet[3461]: I0213 15:20:56.669325 3461 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:20:56.669610 kubelet[3461]: I0213 15:20:56.669566 3461 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:20:56.670281 kubelet[3461]: I0213 15:20:56.669769 3461 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:56.670281 kubelet[3461]: I0213 15:20:56.669986 3461 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:20:56.670281 kubelet[3461]: I0213 15:20:56.670012 3461 kubelet.go:301] "Adding static pod 
path" path="/etc/kubernetes/manifests" Feb 13 15:20:56.670281 kubelet[3461]: I0213 15:20:56.670081 3461 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:20:56.670281 kubelet[3461]: I0213 15:20:56.670121 3461 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:20:56.677256 kubelet[3461]: I0213 15:20:56.677200 3461 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:20:56.679502 kubelet[3461]: I0213 15:20:56.679466 3461 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:20:56.681929 kubelet[3461]: I0213 15:20:56.680893 3461 server.go:1264] "Started kubelet" Feb 13 15:20:56.683772 sudo[3475]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:20:56.684489 sudo[3475]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:20:56.690987 kubelet[3461]: I0213 15:20:56.690123 3461 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:20:56.701265 kubelet[3461]: I0213 15:20:56.699807 3461 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:20:56.703001 kubelet[3461]: I0213 15:20:56.702047 3461 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:20:56.707357 kubelet[3461]: I0213 15:20:56.706157 3461 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:20:56.709792 kubelet[3461]: I0213 15:20:56.708763 3461 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:20:56.719395 kubelet[3461]: I0213 15:20:56.719325 3461 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:20:56.727100 kubelet[3461]: I0213 15:20:56.724912 3461 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:20:56.727100 kubelet[3461]: I0213 15:20:56.725311 3461 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:20:56.729929 kubelet[3461]: I0213 15:20:56.729763 3461 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:20:56.730750 kubelet[3461]: I0213 15:20:56.730017 3461 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:20:56.782746 kubelet[3461]: I0213 15:20:56.781645 3461 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:20:56.806670 kubelet[3461]: I0213 15:20:56.806513 3461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:20:56.834882 kubelet[3461]: I0213 15:20:56.831870 3461 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:20:56.834882 kubelet[3461]: I0213 15:20:56.834085 3461 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:20:56.834882 kubelet[3461]: I0213 15:20:56.834156 3461 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:20:56.834882 kubelet[3461]: E0213 15:20:56.834264 3461 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:20:56.870760 kubelet[3461]: I0213 15:20:56.866611 3461 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-163" Feb 13 15:20:56.907524 kubelet[3461]: I0213 15:20:56.907464 3461 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-28-163" Feb 13 15:20:56.909842 kubelet[3461]: I0213 15:20:56.909763 3461 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-163" Feb 13 15:20:56.934857 kubelet[3461]: E0213 15:20:56.934672 3461 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:20:57.019604 kubelet[3461]: I0213 15:20:57.019338 3461 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:20:57.019604 kubelet[3461]: I0213 15:20:57.019384 3461 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:20:57.019604 kubelet[3461]: I0213 15:20:57.019458 3461 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:20:57.021771 kubelet[3461]: I0213 15:20:57.020816 3461 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:20:57.021771 kubelet[3461]: I0213 15:20:57.020851 3461 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:20:57.021771 kubelet[3461]: I0213 15:20:57.020890 3461 policy_none.go:49] "None policy: Start" Feb 13 15:20:57.024618 kubelet[3461]: I0213 15:20:57.023879 3461 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:20:57.024618 kubelet[3461]: I0213 15:20:57.023945 3461 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:20:57.025513 kubelet[3461]: I0213 15:20:57.025424 3461 state_mem.go:75] "Updated machine memory state" Feb 13 15:20:57.040808 kubelet[3461]: I0213 15:20:57.040490 3461 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:20:57.041792 kubelet[3461]: I0213 15:20:57.041689 3461 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:20:57.043978 kubelet[3461]: I0213 15:20:57.043909 3461 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:20:57.135529 kubelet[3461]: I0213 15:20:57.135252 3461 topology_manager.go:215] "Topology Admit Handler" podUID="c2ac3049b59e014d12a8b898cf95aaac" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-163" Feb 13 15:20:57.135529 kubelet[3461]: I0213 15:20:57.135426 3461 topology_manager.go:215] "Topology Admit Handler" podUID="6630366fd57a324ea231719c822ca9ff" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:57.135529 kubelet[3461]: I0213 15:20:57.135517 3461 topology_manager.go:215] "Topology Admit Handler" podUID="e89d88a02bed9fc387d96b661ec04439" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-163" Feb 13 15:20:57.147107 kubelet[3461]: E0213 15:20:57.146929 3461 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-28-163\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-28-163" Feb 13 15:20:57.231700 kubelet[3461]: I0213 15:20:57.231478 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2ac3049b59e014d12a8b898cf95aaac-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-163\" (UID: \"c2ac3049b59e014d12a8b898cf95aaac\") " pod="kube-system/kube-apiserver-ip-172-31-28-163" Feb 13 15:20:57.232512 kubelet[3461]: I0213 15:20:57.232060 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:57.232512 kubelet[3461]: I0213 15:20:57.232196 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e89d88a02bed9fc387d96b661ec04439-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-163\" (UID: \"e89d88a02bed9fc387d96b661ec04439\") " pod="kube-system/kube-scheduler-ip-172-31-28-163" Feb 13 15:20:57.232512 kubelet[3461]: I0213 15:20:57.232278 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:57.233103 kubelet[3461]: I0213 15:20:57.232830 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2ac3049b59e014d12a8b898cf95aaac-ca-certs\") pod \"kube-apiserver-ip-172-31-28-163\" (UID: \"c2ac3049b59e014d12a8b898cf95aaac\") " pod="kube-system/kube-apiserver-ip-172-31-28-163" Feb 13 15:20:57.233648 kubelet[3461]: I0213 15:20:57.233197 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2ac3049b59e014d12a8b898cf95aaac-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-163\" (UID: \"c2ac3049b59e014d12a8b898cf95aaac\") " pod="kube-system/kube-apiserver-ip-172-31-28-163" Feb 13 15:20:57.233648 kubelet[3461]: I0213 15:20:57.233304 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:57.233648 kubelet[3461]: I0213 15:20:57.233342 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:57.234108 kubelet[3461]: I0213 15:20:57.234014 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6630366fd57a324ea231719c822ca9ff-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-163\" (UID: \"6630366fd57a324ea231719c822ca9ff\") " pod="kube-system/kube-controller-manager-ip-172-31-28-163" Feb 13 15:20:57.689083 kubelet[3461]: I0213 15:20:57.689004 3461 apiserver.go:52] "Watching apiserver" Feb 13 15:20:57.726164 kubelet[3461]: I0213 15:20:57.726075 3461 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:20:57.824496 sudo[3475]: pam_unix(sudo:session): session closed for user root Feb 13 15:20:58.007808 kubelet[3461]: I0213 15:20:58.007644 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-163" podStartSLOduration=4.007617604 podStartE2EDuration="4.007617604s" podCreationTimestamp="2025-02-13 15:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:20:57.993821552 +0000 UTC m=+1.461718496" watchObservedRunningTime="2025-02-13 15:20:58.007617604 +0000 UTC m=+1.475514536" Feb 13 15:20:58.008759 kubelet[3461]: I0213 15:20:58.008508 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-163" podStartSLOduration=1.008486152 podStartE2EDuration="1.008486152s" podCreationTimestamp="2025-02-13 15:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:20:58.008482312 +0000 UTC m=+1.476379232" watchObservedRunningTime="2025-02-13 15:20:58.008486152 +0000 UTC m=+1.476383096" Feb 13 15:20:58.125522 kubelet[3461]: I0213 15:20:58.125276 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-163" podStartSLOduration=1.125251577 podStartE2EDuration="1.125251577s" podCreationTimestamp="2025-02-13 15:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:20:58.071282669 +0000 UTC m=+1.539179697" watchObservedRunningTime="2025-02-13 15:20:58.125251577 +0000 UTC m=+1.593148509" Feb 13 15:21:00.919477 sudo[2295]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:00.942269 sshd[2294]: Connection closed by 147.75.109.163 port 36060 Feb 13 15:21:00.943420 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:00.948727 systemd[1]: sshd@8-172.31.28.163:22-147.75.109.163:36060.service: Deactivated successfully. Feb 13 15:21:00.952308 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:21:00.952905 systemd[1]: session-9.scope: Consumed 13.392s CPU time, 187.8M memory peak, 0B memory swap peak. Feb 13 15:21:00.955931 systemd-logind[1921]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:21:00.959050 systemd-logind[1921]: Removed session 9. Feb 13 15:21:10.777581 kubelet[3461]: I0213 15:21:10.777419 3461 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:21:10.780326 containerd[1940]: time="2025-02-13T15:21:10.780241796Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
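
The kuberuntime_manager line above pushes the node's newly assigned pod CIDR down to the runtime, and containerd replies that it will wait for a CNI config (Cilium, being set up below, provides it). That handoff is the CRI UpdateRuntimeConfig call, sketched here with the CIDR from the log and the usual containerd socket path assumed:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Hand the runtime the pod CIDR assigned to this node, exactly the
        // value logged above.
        _, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("runtime network config updated")
    }
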
Feb 13 15:21:10.781759 kubelet[3461]: I0213 15:21:10.780660 3461 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:21:11.660599 kubelet[3461]: I0213 15:21:11.658994 3461 topology_manager.go:215] "Topology Admit Handler" podUID="343abac5-121a-423b-af61-036647b7051e" podNamespace="kube-system" podName="kube-proxy-8jtlt" Feb 13 15:21:11.674605 kubelet[3461]: I0213 15:21:11.672390 3461 topology_manager.go:215] "Topology Admit Handler" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" podNamespace="kube-system" podName="cilium-cnqfh" Feb 13 15:21:11.686729 systemd[1]: Created slice kubepods-besteffort-pod343abac5_121a_423b_af61_036647b7051e.slice - libcontainer container kubepods-besteffort-pod343abac5_121a_423b_af61_036647b7051e.slice. Feb 13 15:21:11.719895 systemd[1]: Created slice kubepods-burstable-pod715204ab_cf39_4ea0_b1e5_71a69f9b7212.slice - libcontainer container kubepods-burstable-pod715204ab_cf39_4ea0_b1e5_71a69f9b7212.slice. Feb 13 15:21:11.730771 kubelet[3461]: I0213 15:21:11.730631 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/343abac5-121a-423b-af61-036647b7051e-xtables-lock\") pod \"kube-proxy-8jtlt\" (UID: \"343abac5-121a-423b-af61-036647b7051e\") " pod="kube-system/kube-proxy-8jtlt" Feb 13 15:21:11.730771 kubelet[3461]: I0213 15:21:11.730787 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-run\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.731304 kubelet[3461]: I0213 15:21:11.730854 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-kernel\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.731304 kubelet[3461]: I0213 15:21:11.730910 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hubble-tls\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.731304 kubelet[3461]: I0213 15:21:11.730998 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w26td\" (UniqueName: \"kubernetes.io/projected/343abac5-121a-423b-af61-036647b7051e-kube-api-access-w26td\") pod \"kube-proxy-8jtlt\" (UID: \"343abac5-121a-423b-af61-036647b7051e\") " pod="kube-system/kube-proxy-8jtlt" Feb 13 15:21:11.731304 kubelet[3461]: I0213 15:21:11.731048 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cni-path\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.731304 kubelet[3461]: I0213 15:21:11.731084 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-xtables-lock\") pod \"cilium-cnqfh\" (UID: 
\"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.732163 kubelet[3461]: I0213 15:21:11.731713 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/343abac5-121a-423b-af61-036647b7051e-kube-proxy\") pod \"kube-proxy-8jtlt\" (UID: \"343abac5-121a-423b-af61-036647b7051e\") " pod="kube-system/kube-proxy-8jtlt" Feb 13 15:21:11.732163 kubelet[3461]: I0213 15:21:11.731911 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hostproc\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.732163 kubelet[3461]: I0213 15:21:11.732013 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2twbb\" (UniqueName: \"kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-kube-api-access-2twbb\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.732931 kubelet[3461]: I0213 15:21:11.732113 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-config-path\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.732931 kubelet[3461]: I0213 15:21:11.732704 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-net\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.732931 kubelet[3461]: I0213 15:21:11.732778 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-lib-modules\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.732931 kubelet[3461]: I0213 15:21:11.732831 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-bpf-maps\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.733685 kubelet[3461]: I0213 15:21:11.733327 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-cgroup\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.733685 kubelet[3461]: I0213 15:21:11.733410 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/343abac5-121a-423b-af61-036647b7051e-lib-modules\") pod \"kube-proxy-8jtlt\" (UID: \"343abac5-121a-423b-af61-036647b7051e\") " pod="kube-system/kube-proxy-8jtlt" Feb 13 15:21:11.733685 kubelet[3461]: I0213 15:21:11.733488 3461 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-etc-cni-netd\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:11.734047 kubelet[3461]: I0213 15:21:11.733619 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/715204ab-cf39-4ea0-b1e5-71a69f9b7212-clustermesh-secrets\") pod \"cilium-cnqfh\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " pod="kube-system/cilium-cnqfh" Feb 13 15:21:12.009109 containerd[1940]: time="2025-02-13T15:21:12.008920686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jtlt,Uid:343abac5-121a-423b-af61-036647b7051e,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:12.035443 containerd[1940]: time="2025-02-13T15:21:12.033909078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnqfh,Uid:715204ab-cf39-4ea0-b1e5-71a69f9b7212,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:12.112424 kubelet[3461]: I0213 15:21:12.111093 3461 topology_manager.go:215] "Topology Admit Handler" podUID="c4b23637-8634-482c-9df5-2c243302b0a3" podNamespace="kube-system" podName="cilium-operator-599987898-rpmnp" Feb 13 15:21:12.136190 kubelet[3461]: I0213 15:21:12.135954 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5n4g\" (UniqueName: \"kubernetes.io/projected/c4b23637-8634-482c-9df5-2c243302b0a3-kube-api-access-b5n4g\") pod \"cilium-operator-599987898-rpmnp\" (UID: \"c4b23637-8634-482c-9df5-2c243302b0a3\") " pod="kube-system/cilium-operator-599987898-rpmnp" Feb 13 15:21:12.136190 kubelet[3461]: I0213 15:21:12.136057 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b23637-8634-482c-9df5-2c243302b0a3-cilium-config-path\") pod \"cilium-operator-599987898-rpmnp\" (UID: \"c4b23637-8634-482c-9df5-2c243302b0a3\") " pod="kube-system/cilium-operator-599987898-rpmnp" Feb 13 15:21:12.142532 systemd[1]: Created slice kubepods-besteffort-podc4b23637_8634_482c_9df5_2c243302b0a3.slice - libcontainer container kubepods-besteffort-podc4b23637_8634_482c_9df5_2c243302b0a3.slice. Feb 13 15:21:12.162849 containerd[1940]: time="2025-02-13T15:21:12.162079735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:12.162849 containerd[1940]: time="2025-02-13T15:21:12.162195499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:12.162849 containerd[1940]: time="2025-02-13T15:21:12.162228019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:12.162849 containerd[1940]: time="2025-02-13T15:21:12.162430783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:12.174872 containerd[1940]: time="2025-02-13T15:21:12.173606419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:12.180243 containerd[1940]: time="2025-02-13T15:21:12.179718691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:12.180243 containerd[1940]: time="2025-02-13T15:21:12.179791267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:12.181077 containerd[1940]: time="2025-02-13T15:21:12.180780187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:12.247210 systemd[1]: Started cri-containerd-2b628d744c539a14c180c2ba13dddb4184c8ac0e908f2b502d9537312976ec13.scope - libcontainer container 2b628d744c539a14c180c2ba13dddb4184c8ac0e908f2b502d9537312976ec13. Feb 13 15:21:12.266220 systemd[1]: Started cri-containerd-2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79.scope - libcontainer container 2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79. Feb 13 15:21:12.392983 containerd[1940]: time="2025-02-13T15:21:12.392859860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cnqfh,Uid:715204ab-cf39-4ea0-b1e5-71a69f9b7212,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\"" Feb 13 15:21:12.400699 containerd[1940]: time="2025-02-13T15:21:12.399520112Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:21:12.405043 containerd[1940]: time="2025-02-13T15:21:12.404801084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8jtlt,Uid:343abac5-121a-423b-af61-036647b7051e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b628d744c539a14c180c2ba13dddb4184c8ac0e908f2b502d9537312976ec13\"" Feb 13 15:21:12.418465 containerd[1940]: time="2025-02-13T15:21:12.418113668Z" level=info msg="CreateContainer within sandbox \"2b628d744c539a14c180c2ba13dddb4184c8ac0e908f2b502d9537312976ec13\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:21:12.454908 containerd[1940]: time="2025-02-13T15:21:12.454839008Z" level=info msg="CreateContainer within sandbox \"2b628d744c539a14c180c2ba13dddb4184c8ac0e908f2b502d9537312976ec13\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16db613f6d8de70ed73ccffeeb49f009cf0b271b77a9693d28d5639f979c7d7b\"" Feb 13 15:21:12.457536 containerd[1940]: time="2025-02-13T15:21:12.457458488Z" level=info msg="StartContainer for \"16db613f6d8de70ed73ccffeeb49f009cf0b271b77a9693d28d5639f979c7d7b\"" Feb 13 15:21:12.460401 containerd[1940]: time="2025-02-13T15:21:12.459762728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rpmnp,Uid:c4b23637-8634-482c-9df5-2c243302b0a3,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:12.529151 systemd[1]: Started cri-containerd-16db613f6d8de70ed73ccffeeb49f009cf0b271b77a9693d28d5639f979c7d7b.scope - libcontainer container 16db613f6d8de70ed73ccffeeb49f009cf0b271b77a9693d28d5639f979c7d7b. Feb 13 15:21:12.554330 containerd[1940]: time="2025-02-13T15:21:12.553654016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:12.554770 containerd[1940]: time="2025-02-13T15:21:12.554632700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:12.554770 containerd[1940]: time="2025-02-13T15:21:12.554722856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:12.555659 containerd[1940]: time="2025-02-13T15:21:12.555398876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:12.594893 systemd[1]: Started cri-containerd-353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5.scope - libcontainer container 353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5. Feb 13 15:21:12.653024 containerd[1940]: time="2025-02-13T15:21:12.652949037Z" level=info msg="StartContainer for \"16db613f6d8de70ed73ccffeeb49f009cf0b271b77a9693d28d5639f979c7d7b\" returns successfully" Feb 13 15:21:12.690733 containerd[1940]: time="2025-02-13T15:21:12.690520497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rpmnp,Uid:c4b23637-8634-482c-9df5-2c243302b0a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\"" Feb 13 15:21:13.031147 kubelet[3461]: I0213 15:21:13.031047 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8jtlt" podStartSLOduration=2.031020847 podStartE2EDuration="2.031020847s" podCreationTimestamp="2025-02-13 15:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:13.030519847 +0000 UTC m=+16.498416791" watchObservedRunningTime="2025-02-13 15:21:13.031020847 +0000 UTC m=+16.498917767" Feb 13 15:21:19.306576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017620255.mount: Deactivated successfully. 
Feb 13 15:21:22.192642 containerd[1940]: time="2025-02-13T15:21:22.192529156Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:22.194540 containerd[1940]: time="2025-02-13T15:21:22.194453812Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:21:22.196533 containerd[1940]: time="2025-02-13T15:21:22.196421728Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:22.202081 containerd[1940]: time="2025-02-13T15:21:22.201914368Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.802255092s" Feb 13 15:21:22.202081 containerd[1940]: time="2025-02-13T15:21:22.201978496Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:21:22.205061 containerd[1940]: time="2025-02-13T15:21:22.204728956Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:21:22.208881 containerd[1940]: time="2025-02-13T15:21:22.208183960Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:21:22.230269 containerd[1940]: time="2025-02-13T15:21:22.230098637Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\"" Feb 13 15:21:22.232676 containerd[1940]: time="2025-02-13T15:21:22.232268633Z" level=info msg="StartContainer for \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\"" Feb 13 15:21:22.291923 systemd[1]: Started cri-containerd-cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff.scope - libcontainer container cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff. Feb 13 15:21:22.343925 containerd[1940]: time="2025-02-13T15:21:22.343831937Z" level=info msg="StartContainer for \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\" returns successfully" Feb 13 15:21:22.363271 systemd[1]: cri-containerd-cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff.scope: Deactivated successfully. Feb 13 15:21:23.222274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff-rootfs.mount: Deactivated successfully. 
Feb 13 15:21:23.412247 containerd[1940]: time="2025-02-13T15:21:23.412138938Z" level=info msg="shim disconnected" id=cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff namespace=k8s.io Feb 13 15:21:23.412247 containerd[1940]: time="2025-02-13T15:21:23.412215786Z" level=warning msg="cleaning up after shim disconnected" id=cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff namespace=k8s.io Feb 13 15:21:23.412247 containerd[1940]: time="2025-02-13T15:21:23.412237182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:24.055288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381847159.mount: Deactivated successfully. Feb 13 15:21:24.072330 containerd[1940]: time="2025-02-13T15:21:24.071211486Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:21:24.101781 containerd[1940]: time="2025-02-13T15:21:24.101615910Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\"" Feb 13 15:21:24.106982 containerd[1940]: time="2025-02-13T15:21:24.106834770Z" level=info msg="StartContainer for \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\"" Feb 13 15:21:24.184941 systemd[1]: Started cri-containerd-591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50.scope - libcontainer container 591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50. Feb 13 15:21:24.267009 containerd[1940]: time="2025-02-13T15:21:24.266926315Z" level=info msg="StartContainer for \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\" returns successfully" Feb 13 15:21:24.299961 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:21:24.300464 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:21:24.300672 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:21:24.314503 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:21:24.318183 systemd[1]: cri-containerd-591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50.scope: Deactivated successfully. Feb 13 15:21:24.373697 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:21:24.410847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50-rootfs.mount: Deactivated successfully. 
Feb 13 15:21:24.434185 containerd[1940]: time="2025-02-13T15:21:24.433538815Z" level=info msg="shim disconnected" id=591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50 namespace=k8s.io Feb 13 15:21:24.434185 containerd[1940]: time="2025-02-13T15:21:24.433901815Z" level=warning msg="cleaning up after shim disconnected" id=591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50 namespace=k8s.io Feb 13 15:21:24.434185 containerd[1940]: time="2025-02-13T15:21:24.433926091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:24.938858 containerd[1940]: time="2025-02-13T15:21:24.938785846Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:24.940618 containerd[1940]: time="2025-02-13T15:21:24.940454818Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:21:24.941527 containerd[1940]: time="2025-02-13T15:21:24.941309386Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:24.946989 containerd[1940]: time="2025-02-13T15:21:24.946110814Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.741310302s" Feb 13 15:21:24.946989 containerd[1940]: time="2025-02-13T15:21:24.946206130Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:21:24.954255 containerd[1940]: time="2025-02-13T15:21:24.953867194Z" level=info msg="CreateContainer within sandbox \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:21:24.983007 containerd[1940]: time="2025-02-13T15:21:24.982881490Z" level=info msg="CreateContainer within sandbox \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\"" Feb 13 15:21:24.985624 containerd[1940]: time="2025-02-13T15:21:24.984226390Z" level=info msg="StartContainer for \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\"" Feb 13 15:21:25.047928 systemd[1]: Started cri-containerd-ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707.scope - libcontainer container ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707. 
Feb 13 15:21:25.100628 containerd[1940]: time="2025-02-13T15:21:25.099339535Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:21:25.157595 containerd[1940]: time="2025-02-13T15:21:25.156994579Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\"" Feb 13 15:21:25.169934 containerd[1940]: time="2025-02-13T15:21:25.169709611Z" level=info msg="StartContainer for \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\"" Feb 13 15:21:25.178757 containerd[1940]: time="2025-02-13T15:21:25.177466099Z" level=info msg="StartContainer for \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\" returns successfully" Feb 13 15:21:25.262888 systemd[1]: Started cri-containerd-3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b.scope - libcontainer container 3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b. Feb 13 15:21:25.340481 containerd[1940]: time="2025-02-13T15:21:25.339357176Z" level=info msg="StartContainer for \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\" returns successfully" Feb 13 15:21:25.349895 systemd[1]: cri-containerd-3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b.scope: Deactivated successfully. Feb 13 15:21:25.413953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b-rootfs.mount: Deactivated successfully. Feb 13 15:21:25.536069 containerd[1940]: time="2025-02-13T15:21:25.535514445Z" level=info msg="shim disconnected" id=3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b namespace=k8s.io Feb 13 15:21:25.536069 containerd[1940]: time="2025-02-13T15:21:25.535880649Z" level=warning msg="cleaning up after shim disconnected" id=3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b namespace=k8s.io Feb 13 15:21:25.536069 containerd[1940]: time="2025-02-13T15:21:25.535910409Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:26.111714 containerd[1940]: time="2025-02-13T15:21:26.111637616Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:21:26.155781 containerd[1940]: time="2025-02-13T15:21:26.155702276Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\"" Feb 13 15:21:26.162826 containerd[1940]: time="2025-02-13T15:21:26.160300004Z" level=info msg="StartContainer for \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\"" Feb 13 15:21:26.227630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528282413.mount: Deactivated successfully. Feb 13 15:21:26.244878 systemd[1]: Started cri-containerd-45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af.scope - libcontainer container 45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af. 
Feb 13 15:21:26.273860 kubelet[3461]: I0213 15:21:26.273763 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rpmnp" podStartSLOduration=3.019048112 podStartE2EDuration="15.273742641s" podCreationTimestamp="2025-02-13 15:21:11 +0000 UTC" firstStartedPulling="2025-02-13 15:21:12.693761601 +0000 UTC m=+16.161658521" lastFinishedPulling="2025-02-13 15:21:24.94845613 +0000 UTC m=+28.416353050" observedRunningTime="2025-02-13 15:21:26.179079716 +0000 UTC m=+29.646976648" watchObservedRunningTime="2025-02-13 15:21:26.273742641 +0000 UTC m=+29.741639609" Feb 13 15:21:26.339381 containerd[1940]: time="2025-02-13T15:21:26.339313389Z" level=info msg="StartContainer for \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\" returns successfully" Feb 13 15:21:26.342031 systemd[1]: cri-containerd-45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af.scope: Deactivated successfully. Feb 13 15:21:26.406445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af-rootfs.mount: Deactivated successfully. Feb 13 15:21:26.413492 containerd[1940]: time="2025-02-13T15:21:26.413291241Z" level=info msg="shim disconnected" id=45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af namespace=k8s.io Feb 13 15:21:26.414213 containerd[1940]: time="2025-02-13T15:21:26.413905053Z" level=warning msg="cleaning up after shim disconnected" id=45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af namespace=k8s.io Feb 13 15:21:26.414213 containerd[1940]: time="2025-02-13T15:21:26.413967969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:27.124942 containerd[1940]: time="2025-02-13T15:21:27.124872393Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:21:27.184510 containerd[1940]: time="2025-02-13T15:21:27.180878949Z" level=info msg="CreateContainer within sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\"" Feb 13 15:21:27.184510 containerd[1940]: time="2025-02-13T15:21:27.182848221Z" level=info msg="StartContainer for \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\"" Feb 13 15:21:27.334936 systemd[1]: Started cri-containerd-91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf.scope - libcontainer container 91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf. 
Feb 13 15:21:27.429412 containerd[1940]: time="2025-02-13T15:21:27.429330298Z" level=info msg="StartContainer for \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\" returns successfully" Feb 13 15:21:27.591714 kubelet[3461]: I0213 15:21:27.591626 3461 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:21:27.666946 kubelet[3461]: I0213 15:21:27.665918 3461 topology_manager.go:215] "Topology Admit Handler" podUID="69b4169b-ebcb-413a-9cf8-8036752a5e9e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kbk7g" Feb 13 15:21:27.691599 kubelet[3461]: I0213 15:21:27.691206 3461 topology_manager.go:215] "Topology Admit Handler" podUID="fd527888-ab9e-415c-8d98-e537605e5b3f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hbbgs" Feb 13 15:21:27.694849 systemd[1]: Created slice kubepods-burstable-pod69b4169b_ebcb_413a_9cf8_8036752a5e9e.slice - libcontainer container kubepods-burstable-pod69b4169b_ebcb_413a_9cf8_8036752a5e9e.slice. Feb 13 15:21:27.725672 systemd[1]: Created slice kubepods-burstable-podfd527888_ab9e_415c_8d98_e537605e5b3f.slice - libcontainer container kubepods-burstable-podfd527888_ab9e_415c_8d98_e537605e5b3f.slice. Feb 13 15:21:27.754535 kubelet[3461]: I0213 15:21:27.753666 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd527888-ab9e-415c-8d98-e537605e5b3f-config-volume\") pod \"coredns-7db6d8ff4d-hbbgs\" (UID: \"fd527888-ab9e-415c-8d98-e537605e5b3f\") " pod="kube-system/coredns-7db6d8ff4d-hbbgs" Feb 13 15:21:27.754535 kubelet[3461]: I0213 15:21:27.753741 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69b4169b-ebcb-413a-9cf8-8036752a5e9e-config-volume\") pod \"coredns-7db6d8ff4d-kbk7g\" (UID: \"69b4169b-ebcb-413a-9cf8-8036752a5e9e\") " pod="kube-system/coredns-7db6d8ff4d-kbk7g" Feb 13 15:21:27.754535 kubelet[3461]: I0213 15:21:27.753790 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvnjj\" (UniqueName: \"kubernetes.io/projected/fd527888-ab9e-415c-8d98-e537605e5b3f-kube-api-access-mvnjj\") pod \"coredns-7db6d8ff4d-hbbgs\" (UID: \"fd527888-ab9e-415c-8d98-e537605e5b3f\") " pod="kube-system/coredns-7db6d8ff4d-hbbgs" Feb 13 15:21:27.754535 kubelet[3461]: I0213 15:21:27.753852 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fzf\" (UniqueName: \"kubernetes.io/projected/69b4169b-ebcb-413a-9cf8-8036752a5e9e-kube-api-access-s4fzf\") pod \"coredns-7db6d8ff4d-kbk7g\" (UID: \"69b4169b-ebcb-413a-9cf8-8036752a5e9e\") " pod="kube-system/coredns-7db6d8ff4d-kbk7g" Feb 13 15:21:28.005883 containerd[1940]: time="2025-02-13T15:21:28.005720157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbk7g,Uid:69b4169b-ebcb-413a-9cf8-8036752a5e9e,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:28.036795 containerd[1940]: time="2025-02-13T15:21:28.036639573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hbbgs,Uid:fd527888-ab9e-415c-8d98-e537605e5b3f,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:30.524105 systemd-networkd[1836]: cilium_host: Link UP Feb 13 15:21:30.524965 (udev-worker)[4252]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:21:30.526309 systemd-networkd[1836]: cilium_net: Link UP Feb 13 15:21:30.526996 systemd-networkd[1836]: cilium_net: Gained carrier Feb 13 15:21:30.527380 systemd-networkd[1836]: cilium_host: Gained carrier Feb 13 15:21:30.534673 (udev-worker)[4291]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:21:30.729526 systemd-networkd[1836]: cilium_vxlan: Link UP Feb 13 15:21:30.729540 systemd-networkd[1836]: cilium_vxlan: Gained carrier Feb 13 15:21:30.858768 systemd-networkd[1836]: cilium_net: Gained IPv6LL Feb 13 15:21:31.280596 kernel: NET: Registered PF_ALG protocol family Feb 13 15:21:31.330724 systemd-networkd[1836]: cilium_host: Gained IPv6LL Feb 13 15:21:32.675011 systemd-networkd[1836]: cilium_vxlan: Gained IPv6LL Feb 13 15:21:32.839322 systemd-networkd[1836]: lxc_health: Link UP Feb 13 15:21:32.846163 (udev-worker)[4304]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:21:32.852998 systemd-networkd[1836]: lxc_health: Gained carrier Feb 13 15:21:33.133218 systemd-networkd[1836]: lxc281ce85812b1: Link UP Feb 13 15:21:33.140768 kernel: eth0: renamed from tmp9a9b4 Feb 13 15:21:33.148496 systemd-networkd[1836]: lxc281ce85812b1: Gained carrier Feb 13 15:21:33.595079 systemd-networkd[1836]: lxc91f4e2f91095: Link UP Feb 13 15:21:33.604610 kernel: eth0: renamed from tmpc37bc Feb 13 15:21:33.615508 systemd-networkd[1836]: lxc91f4e2f91095: Gained carrier Feb 13 15:21:34.089872 kubelet[3461]: I0213 15:21:34.089696 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cnqfh" podStartSLOduration=13.283142551 podStartE2EDuration="23.089626551s" podCreationTimestamp="2025-02-13 15:21:11 +0000 UTC" firstStartedPulling="2025-02-13 15:21:12.396745436 +0000 UTC m=+15.864642356" lastFinishedPulling="2025-02-13 15:21:22.203229436 +0000 UTC m=+25.671126356" observedRunningTime="2025-02-13 15:21:28.184415422 +0000 UTC m=+31.652312354" watchObservedRunningTime="2025-02-13 15:21:34.089626551 +0000 UTC m=+37.557523471" Feb 13 15:21:34.594847 systemd-networkd[1836]: lxc_health: Gained IPv6LL Feb 13 15:21:34.786842 systemd-networkd[1836]: lxc91f4e2f91095: Gained IPv6LL Feb 13 15:21:34.850841 systemd-networkd[1836]: lxc281ce85812b1: Gained IPv6LL Feb 13 15:21:37.152372 ntpd[1914]: Listen normally on 8 cilium_host 192.168.0.17:123 Feb 13 15:21:37.152526 ntpd[1914]: Listen normally on 9 cilium_net [fe80::549c:dbff:fe77:c034%4]:123 Feb 13 15:21:37.153452 ntpd[1914]: 13 Feb 15:21:37 ntpd[1914]: Listen normally on 8 cilium_host 192.168.0.17:123 Feb 13 15:21:37.153452 ntpd[1914]: 13 Feb 15:21:37 ntpd[1914]: Listen normally on 9 cilium_net [fe80::549c:dbff:fe77:c034%4]:123 Feb 13 15:21:37.153452 ntpd[1914]: 13 Feb 15:21:37 ntpd[1914]: Listen normally on 10 cilium_host [fe80::88ff:80ff:fe0d:6c77%5]:123 Feb 13 15:21:37.153452 ntpd[1914]: 13 Feb 15:21:37 ntpd[1914]: Listen normally on 11 cilium_vxlan [fe80::60e8:beff:fef9:82cb%6]:123 Feb 13 15:21:37.153452 ntpd[1914]: 13 Feb 15:21:37 ntpd[1914]: Listen normally on 12 lxc_health [fe80::58d8:b4ff:fe98:23f7%8]:123 Feb 13 15:21:37.153452 ntpd[1914]: 13 Feb 15:21:37 ntpd[1914]: Listen normally on 13 lxc281ce85812b1 [fe80::541a:deff:fe45:a6dc%10]:123 Feb 13 15:21:37.153452 ntpd[1914]: 13 Feb 15:21:37 ntpd[1914]: Listen normally on 14 lxc91f4e2f91095 [fe80::6475:ceff:feab:fef8%12]:123 Feb 13 15:21:37.152656 ntpd[1914]: Listen normally on 10 cilium_host [fe80::88ff:80ff:fe0d:6c77%5]:123 Feb 13 15:21:37.152727 ntpd[1914]: Listen normally on 11 cilium_vxlan 
[fe80::60e8:beff:fef9:82cb%6]:123 Feb 13 15:21:37.152797 ntpd[1914]: Listen normally on 12 lxc_health [fe80::58d8:b4ff:fe98:23f7%8]:123 Feb 13 15:21:37.152870 ntpd[1914]: Listen normally on 13 lxc281ce85812b1 [fe80::541a:deff:fe45:a6dc%10]:123 Feb 13 15:21:37.152948 ntpd[1914]: Listen normally on 14 lxc91f4e2f91095 [fe80::6475:ceff:feab:fef8%12]:123 Feb 13 15:21:41.415323 systemd[1]: Started sshd@9-172.31.28.163:22-147.75.109.163:48508.service - OpenSSH per-connection server daemon (147.75.109.163:48508). Feb 13 15:21:41.620564 sshd[4658]: Accepted publickey for core from 147.75.109.163 port 48508 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:41.622365 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:41.631887 systemd-logind[1921]: New session 10 of user core. Feb 13 15:21:41.640900 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:21:41.983979 sshd[4660]: Connection closed by 147.75.109.163 port 48508 Feb 13 15:21:41.985153 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:41.996079 systemd[1]: sshd@9-172.31.28.163:22-147.75.109.163:48508.service: Deactivated successfully. Feb 13 15:21:42.005000 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:21:42.008787 systemd-logind[1921]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:21:42.013610 systemd-logind[1921]: Removed session 10. Feb 13 15:21:43.263688 containerd[1940]: time="2025-02-13T15:21:43.263079541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:43.263688 containerd[1940]: time="2025-02-13T15:21:43.263207317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:43.263688 containerd[1940]: time="2025-02-13T15:21:43.263245837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:43.263688 containerd[1940]: time="2025-02-13T15:21:43.263433349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:43.278223 containerd[1940]: time="2025-02-13T15:21:43.271635517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:43.278223 containerd[1940]: time="2025-02-13T15:21:43.271747993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:43.278223 containerd[1940]: time="2025-02-13T15:21:43.271787101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:43.278223 containerd[1940]: time="2025-02-13T15:21:43.271945897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:43.363448 systemd[1]: Started cri-containerd-c37bc784b740a891da28d6fd661c359550f8c1366e5aa827170f2a6b181be277.scope - libcontainer container c37bc784b740a891da28d6fd661c359550f8c1366e5aa827170f2a6b181be277. Feb 13 15:21:43.387831 systemd[1]: run-containerd-runc-k8s.io-9a9b4238cbbcebaf6a970955c4db51336f3c3a0c74407b5112978c3bd273b89d-runc.CWMYdn.mount: Deactivated successfully. 
Feb 13 15:21:43.412256 systemd[1]: Started cri-containerd-9a9b4238cbbcebaf6a970955c4db51336f3c3a0c74407b5112978c3bd273b89d.scope - libcontainer container 9a9b4238cbbcebaf6a970955c4db51336f3c3a0c74407b5112978c3bd273b89d. Feb 13 15:21:43.507588 containerd[1940]: time="2025-02-13T15:21:43.507425270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbk7g,Uid:69b4169b-ebcb-413a-9cf8-8036752a5e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c37bc784b740a891da28d6fd661c359550f8c1366e5aa827170f2a6b181be277\"" Feb 13 15:21:43.518979 containerd[1940]: time="2025-02-13T15:21:43.518726210Z" level=info msg="CreateContainer within sandbox \"c37bc784b740a891da28d6fd661c359550f8c1366e5aa827170f2a6b181be277\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:21:43.556945 containerd[1940]: time="2025-02-13T15:21:43.556755902Z" level=info msg="CreateContainer within sandbox \"c37bc784b740a891da28d6fd661c359550f8c1366e5aa827170f2a6b181be277\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c5807d6c6e2a09f99ef1a514d06404d6eab9a3e5a7a2c612364c069420e7057\"" Feb 13 15:21:43.560492 containerd[1940]: time="2025-02-13T15:21:43.560109338Z" level=info msg="StartContainer for \"5c5807d6c6e2a09f99ef1a514d06404d6eab9a3e5a7a2c612364c069420e7057\"" Feb 13 15:21:43.575367 containerd[1940]: time="2025-02-13T15:21:43.575306991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hbbgs,Uid:fd527888-ab9e-415c-8d98-e537605e5b3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a9b4238cbbcebaf6a970955c4db51336f3c3a0c74407b5112978c3bd273b89d\"" Feb 13 15:21:43.587949 containerd[1940]: time="2025-02-13T15:21:43.587887839Z" level=info msg="CreateContainer within sandbox \"9a9b4238cbbcebaf6a970955c4db51336f3c3a0c74407b5112978c3bd273b89d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:21:43.625472 containerd[1940]: time="2025-02-13T15:21:43.625363623Z" level=info msg="CreateContainer within sandbox \"9a9b4238cbbcebaf6a970955c4db51336f3c3a0c74407b5112978c3bd273b89d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c4ed822a4bae7ea1fbaa1de86a6f353ca240eea43b09fd7317a339842f3fafe\"" Feb 13 15:21:43.628830 containerd[1940]: time="2025-02-13T15:21:43.627133311Z" level=info msg="StartContainer for \"5c4ed822a4bae7ea1fbaa1de86a6f353ca240eea43b09fd7317a339842f3fafe\"" Feb 13 15:21:43.672724 systemd[1]: Started cri-containerd-5c5807d6c6e2a09f99ef1a514d06404d6eab9a3e5a7a2c612364c069420e7057.scope - libcontainer container 5c5807d6c6e2a09f99ef1a514d06404d6eab9a3e5a7a2c612364c069420e7057. Feb 13 15:21:43.723921 systemd[1]: Started cri-containerd-5c4ed822a4bae7ea1fbaa1de86a6f353ca240eea43b09fd7317a339842f3fafe.scope - libcontainer container 5c4ed822a4bae7ea1fbaa1de86a6f353ca240eea43b09fd7317a339842f3fafe. 
Feb 13 15:21:43.794132 containerd[1940]: time="2025-02-13T15:21:43.792625336Z" level=info msg="StartContainer for \"5c5807d6c6e2a09f99ef1a514d06404d6eab9a3e5a7a2c612364c069420e7057\" returns successfully" Feb 13 15:21:43.880636 containerd[1940]: time="2025-02-13T15:21:43.878853856Z" level=info msg="StartContainer for \"5c4ed822a4bae7ea1fbaa1de86a6f353ca240eea43b09fd7317a339842f3fafe\" returns successfully" Feb 13 15:21:44.214456 kubelet[3461]: I0213 15:21:44.213824 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hbbgs" podStartSLOduration=33.213772754 podStartE2EDuration="33.213772754s" podCreationTimestamp="2025-02-13 15:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:44.20930909 +0000 UTC m=+47.677206130" watchObservedRunningTime="2025-02-13 15:21:44.213772754 +0000 UTC m=+47.681669746" Feb 13 15:21:44.236009 kubelet[3461]: I0213 15:21:44.235656 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kbk7g" podStartSLOduration=32.235632614 podStartE2EDuration="32.235632614s" podCreationTimestamp="2025-02-13 15:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:44.233108894 +0000 UTC m=+47.701005826" watchObservedRunningTime="2025-02-13 15:21:44.235632614 +0000 UTC m=+47.703529558" Feb 13 15:21:47.030132 systemd[1]: Started sshd@10-172.31.28.163:22-147.75.109.163:48510.service - OpenSSH per-connection server daemon (147.75.109.163:48510). Feb 13 15:21:47.218433 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 48510 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:47.221508 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:47.232041 systemd-logind[1921]: New session 11 of user core. Feb 13 15:21:47.240875 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:21:47.523782 sshd[4847]: Connection closed by 147.75.109.163 port 48510 Feb 13 15:21:47.524754 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:47.530726 systemd-logind[1921]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:21:47.532870 systemd[1]: sshd@10-172.31.28.163:22-147.75.109.163:48510.service: Deactivated successfully. Feb 13 15:21:47.537146 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:21:47.539739 systemd-logind[1921]: Removed session 11. Feb 13 15:21:52.566083 systemd[1]: Started sshd@11-172.31.28.163:22-147.75.109.163:42568.service - OpenSSH per-connection server daemon (147.75.109.163:42568). Feb 13 15:21:52.754114 sshd[4864]: Accepted publickey for core from 147.75.109.163 port 42568 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:52.756951 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:52.766084 systemd-logind[1921]: New session 12 of user core. Feb 13 15:21:52.771887 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:21:53.013538 sshd[4866]: Connection closed by 147.75.109.163 port 42568 Feb 13 15:21:53.013416 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:53.018616 systemd-logind[1921]: Session 12 logged out. Waiting for processes to exit. 
Feb 13 15:21:53.019262 systemd[1]: sshd@11-172.31.28.163:22-147.75.109.163:42568.service: Deactivated successfully. Feb 13 15:21:53.023202 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:21:53.027763 systemd-logind[1921]: Removed session 12. Feb 13 15:21:58.062124 systemd[1]: Started sshd@12-172.31.28.163:22-147.75.109.163:42580.service - OpenSSH per-connection server daemon (147.75.109.163:42580). Feb 13 15:21:58.257641 sshd[4880]: Accepted publickey for core from 147.75.109.163 port 42580 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:58.260490 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:58.269752 systemd-logind[1921]: New session 13 of user core. Feb 13 15:21:58.279868 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:21:58.541139 sshd[4882]: Connection closed by 147.75.109.163 port 42580 Feb 13 15:21:58.542005 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:58.548883 systemd[1]: sshd@12-172.31.28.163:22-147.75.109.163:42580.service: Deactivated successfully. Feb 13 15:21:58.552924 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:21:58.554825 systemd-logind[1921]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:21:58.557448 systemd-logind[1921]: Removed session 13. Feb 13 15:22:03.588836 systemd[1]: Started sshd@13-172.31.28.163:22-147.75.109.163:42508.service - OpenSSH per-connection server daemon (147.75.109.163:42508). Feb 13 15:22:03.792628 sshd[4894]: Accepted publickey for core from 147.75.109.163 port 42508 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:03.795829 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:03.816052 systemd-logind[1921]: New session 14 of user core. Feb 13 15:22:03.825012 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:22:04.099566 sshd[4896]: Connection closed by 147.75.109.163 port 42508 Feb 13 15:22:04.100509 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:04.108626 systemd[1]: sshd@13-172.31.28.163:22-147.75.109.163:42508.service: Deactivated successfully. Feb 13 15:22:04.115707 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:22:04.119285 systemd-logind[1921]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:22:04.138094 systemd[1]: Started sshd@14-172.31.28.163:22-147.75.109.163:42512.service - OpenSSH per-connection server daemon (147.75.109.163:42512). Feb 13 15:22:04.141121 systemd-logind[1921]: Removed session 14. Feb 13 15:22:04.337871 sshd[4908]: Accepted publickey for core from 147.75.109.163 port 42512 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:04.341203 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:04.352618 systemd-logind[1921]: New session 15 of user core. Feb 13 15:22:04.359134 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:22:04.707018 sshd[4910]: Connection closed by 147.75.109.163 port 42512 Feb 13 15:22:04.707610 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:04.721048 systemd[1]: sshd@14-172.31.28.163:22-147.75.109.163:42512.service: Deactivated successfully. Feb 13 15:22:04.727385 systemd[1]: session-15.scope: Deactivated successfully. 
Feb 13 15:22:04.733961 systemd-logind[1921]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:22:04.758159 systemd[1]: Started sshd@15-172.31.28.163:22-147.75.109.163:42518.service - OpenSSH per-connection server daemon (147.75.109.163:42518). Feb 13 15:22:04.760488 systemd-logind[1921]: Removed session 15. Feb 13 15:22:04.964564 sshd[4919]: Accepted publickey for core from 147.75.109.163 port 42518 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:04.968296 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:04.982489 systemd-logind[1921]: New session 16 of user core. Feb 13 15:22:04.988979 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:22:05.237388 sshd[4921]: Connection closed by 147.75.109.163 port 42518 Feb 13 15:22:05.238763 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:05.247102 systemd[1]: sshd@15-172.31.28.163:22-147.75.109.163:42518.service: Deactivated successfully. Feb 13 15:22:05.252306 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:22:05.256790 systemd-logind[1921]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:22:05.259340 systemd-logind[1921]: Removed session 16. Feb 13 15:22:10.279067 systemd[1]: Started sshd@16-172.31.28.163:22-147.75.109.163:50090.service - OpenSSH per-connection server daemon (147.75.109.163:50090). Feb 13 15:22:10.472036 sshd[4932]: Accepted publickey for core from 147.75.109.163 port 50090 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:10.475856 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:10.486642 systemd-logind[1921]: New session 17 of user core. Feb 13 15:22:10.498923 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:22:10.762134 sshd[4934]: Connection closed by 147.75.109.163 port 50090 Feb 13 15:22:10.763101 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:10.768340 systemd-logind[1921]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:22:10.769222 systemd[1]: sshd@16-172.31.28.163:22-147.75.109.163:50090.service: Deactivated successfully. Feb 13 15:22:10.772949 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:22:10.778921 systemd-logind[1921]: Removed session 17. Feb 13 15:22:15.807149 systemd[1]: Started sshd@17-172.31.28.163:22-147.75.109.163:50104.service - OpenSSH per-connection server daemon (147.75.109.163:50104). Feb 13 15:22:15.995738 sshd[4949]: Accepted publickey for core from 147.75.109.163 port 50104 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:15.998904 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:16.011711 systemd-logind[1921]: New session 18 of user core. Feb 13 15:22:16.017965 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:22:16.261089 sshd[4951]: Connection closed by 147.75.109.163 port 50104 Feb 13 15:22:16.262022 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:16.269524 systemd[1]: sshd@17-172.31.28.163:22-147.75.109.163:50104.service: Deactivated successfully. Feb 13 15:22:16.274797 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:22:16.277019 systemd-logind[1921]: Session 18 logged out. Waiting for processes to exit. 
Feb 13 15:22:16.279454 systemd-logind[1921]: Removed session 18. Feb 13 15:22:21.305277 systemd[1]: Started sshd@18-172.31.28.163:22-147.75.109.163:46516.service - OpenSSH per-connection server daemon (147.75.109.163:46516). Feb 13 15:22:21.504012 sshd[4964]: Accepted publickey for core from 147.75.109.163 port 46516 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:21.506713 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:21.518148 systemd-logind[1921]: New session 19 of user core. Feb 13 15:22:21.522951 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:22:21.787982 sshd[4966]: Connection closed by 147.75.109.163 port 46516 Feb 13 15:22:21.789326 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:21.794623 systemd[1]: sshd@18-172.31.28.163:22-147.75.109.163:46516.service: Deactivated successfully. Feb 13 15:22:21.799692 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:22:21.802940 systemd-logind[1921]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:22:21.805208 systemd-logind[1921]: Removed session 19. Feb 13 15:22:21.828081 systemd[1]: Started sshd@19-172.31.28.163:22-147.75.109.163:46528.service - OpenSSH per-connection server daemon (147.75.109.163:46528). Feb 13 15:22:22.012188 sshd[4976]: Accepted publickey for core from 147.75.109.163 port 46528 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:22.014924 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:22.022850 systemd-logind[1921]: New session 20 of user core. Feb 13 15:22:22.029841 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:22:22.341119 sshd[4978]: Connection closed by 147.75.109.163 port 46528 Feb 13 15:22:22.341680 sshd-session[4976]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:22.348523 systemd[1]: sshd@19-172.31.28.163:22-147.75.109.163:46528.service: Deactivated successfully. Feb 13 15:22:22.352425 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:22:22.356343 systemd-logind[1921]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:22:22.359737 systemd-logind[1921]: Removed session 20. Feb 13 15:22:22.378141 systemd[1]: Started sshd@20-172.31.28.163:22-147.75.109.163:46542.service - OpenSSH per-connection server daemon (147.75.109.163:46542). Feb 13 15:22:22.573821 sshd[4987]: Accepted publickey for core from 147.75.109.163 port 46542 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:22.576657 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:22.586958 systemd-logind[1921]: New session 21 of user core. Feb 13 15:22:22.589888 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:22:25.277929 sshd[4989]: Connection closed by 147.75.109.163 port 46542 Feb 13 15:22:25.281227 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:25.292522 systemd[1]: sshd@20-172.31.28.163:22-147.75.109.163:46542.service: Deactivated successfully. Feb 13 15:22:25.303449 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:22:25.305154 systemd-logind[1921]: Session 21 logged out. Waiting for processes to exit. 
Feb 13 15:22:25.331335 systemd[1]: Started sshd@21-172.31.28.163:22-147.75.109.163:46546.service - OpenSSH per-connection server daemon (147.75.109.163:46546). Feb 13 15:22:25.334509 systemd-logind[1921]: Removed session 21. Feb 13 15:22:25.531795 sshd[5005]: Accepted publickey for core from 147.75.109.163 port 46546 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:25.534483 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:25.548432 systemd-logind[1921]: New session 22 of user core. Feb 13 15:22:25.556949 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:22:26.095348 sshd[5007]: Connection closed by 147.75.109.163 port 46546 Feb 13 15:22:26.097000 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:26.107083 systemd[1]: sshd@21-172.31.28.163:22-147.75.109.163:46546.service: Deactivated successfully. Feb 13 15:22:26.112021 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:22:26.115266 systemd-logind[1921]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:22:26.134275 systemd[1]: Started sshd@22-172.31.28.163:22-147.75.109.163:46550.service - OpenSSH per-connection server daemon (147.75.109.163:46550). Feb 13 15:22:26.136903 systemd-logind[1921]: Removed session 22. Feb 13 15:22:26.336659 sshd[5016]: Accepted publickey for core from 147.75.109.163 port 46550 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:26.339522 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:26.348833 systemd-logind[1921]: New session 23 of user core. Feb 13 15:22:26.352846 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:22:26.605904 sshd[5018]: Connection closed by 147.75.109.163 port 46550 Feb 13 15:22:26.607895 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:26.615300 systemd-logind[1921]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:22:26.615861 systemd[1]: sshd@22-172.31.28.163:22-147.75.109.163:46550.service: Deactivated successfully. Feb 13 15:22:26.621136 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:22:26.626411 systemd-logind[1921]: Removed session 23. Feb 13 15:22:31.647505 systemd[1]: Started sshd@23-172.31.28.163:22-147.75.109.163:45522.service - OpenSSH per-connection server daemon (147.75.109.163:45522). Feb 13 15:22:31.842688 sshd[5029]: Accepted publickey for core from 147.75.109.163 port 45522 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:31.845240 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:31.854871 systemd-logind[1921]: New session 24 of user core. Feb 13 15:22:31.860840 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:22:32.112686 sshd[5031]: Connection closed by 147.75.109.163 port 45522 Feb 13 15:22:32.111752 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:32.120496 systemd[1]: sshd@23-172.31.28.163:22-147.75.109.163:45522.service: Deactivated successfully. Feb 13 15:22:32.125514 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:22:32.127896 systemd-logind[1921]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:22:32.131202 systemd-logind[1921]: Removed session 24. 
Feb 13 15:22:37.150074 systemd[1]: Started sshd@24-172.31.28.163:22-147.75.109.163:45530.service - OpenSSH per-connection server daemon (147.75.109.163:45530). Feb 13 15:22:37.345136 sshd[5045]: Accepted publickey for core from 147.75.109.163 port 45530 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:37.349266 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:37.359238 systemd-logind[1921]: New session 25 of user core. Feb 13 15:22:37.368951 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:22:37.634423 sshd[5047]: Connection closed by 147.75.109.163 port 45530 Feb 13 15:22:37.635877 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:37.643700 systemd[1]: sshd@24-172.31.28.163:22-147.75.109.163:45530.service: Deactivated successfully. Feb 13 15:22:37.648388 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:22:37.650064 systemd-logind[1921]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:22:37.653153 systemd-logind[1921]: Removed session 25. Feb 13 15:22:42.679145 systemd[1]: Started sshd@25-172.31.28.163:22-147.75.109.163:43132.service - OpenSSH per-connection server daemon (147.75.109.163:43132). Feb 13 15:22:42.873949 sshd[5057]: Accepted publickey for core from 147.75.109.163 port 43132 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:42.877022 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:42.887959 systemd-logind[1921]: New session 26 of user core. Feb 13 15:22:42.893918 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:22:43.156482 sshd[5061]: Connection closed by 147.75.109.163 port 43132 Feb 13 15:22:43.157398 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:43.163000 systemd[1]: sshd@25-172.31.28.163:22-147.75.109.163:43132.service: Deactivated successfully. Feb 13 15:22:43.167797 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:22:43.171419 systemd-logind[1921]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:22:43.174625 systemd-logind[1921]: Removed session 26. Feb 13 15:22:48.199100 systemd[1]: Started sshd@26-172.31.28.163:22-147.75.109.163:43134.service - OpenSSH per-connection server daemon (147.75.109.163:43134). Feb 13 15:22:48.401883 sshd[5072]: Accepted publickey for core from 147.75.109.163 port 43134 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:48.405434 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:48.418495 systemd-logind[1921]: New session 27 of user core. Feb 13 15:22:48.425034 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:22:48.703632 sshd[5074]: Connection closed by 147.75.109.163 port 43134 Feb 13 15:22:48.705007 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:48.712876 systemd-logind[1921]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:22:48.716166 systemd[1]: sshd@26-172.31.28.163:22-147.75.109.163:43134.service: Deactivated successfully. Feb 13 15:22:48.722536 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:22:48.738817 systemd-logind[1921]: Removed session 27. 
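
The stretch above is a steady cadence of short-lived SSH sessions: systemd starts a per-connection `sshd@N-<local>:22-<peer>:<port>.service` unit, sshd accepts the `core` user's public key, logind opens `session-N.scope`, and seconds later the peer disconnects and the unit is deactivated. A minimal sketch for measuring that churn from a dump like this one — the file name `journal.log` is an assumption; the regexes match the logind records shown above, and scanning the whole text tolerates the dump's long multi-record lines:

```python
# Pair systemd-logind "New session N" / "Removed session N" records and
# report how long each SSH session lasted.
import re
from datetime import datetime

TS = r"(\w{3} {1,2}\d+ \d\d:\d\d:\d\d\.\d+)"

def stamp(s: str) -> datetime:
    # Journal short timestamps carry no year; strptime defaults to 1900.
    return datetime.strptime(s, "%b %d %H:%M:%S.%f")

text = open("journal.log").read()  # hypothetical file name

opened = {
    m.group(2): (stamp(m.group(1)), m.group(3))
    for m in re.finditer(TS + r" systemd-logind\[\d+\]: New session (\d+) of user (\w+)", text)
}
for m in re.finditer(TS + r" systemd-logind\[\d+\]: Removed session (\d+)", text):
    sid = m.group(2)
    if sid in opened:
        start, user = opened.pop(sid)
        print(f"session {sid} ({user}): {(stamp(m.group(1)) - start).total_seconds():.1f}s")
```

Most sessions here last well under a second — consistent with automated per-command SSH rather than interactive logins.
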
Feb 13 15:22:48.749370 systemd[1]: Started sshd@27-172.31.28.163:22-147.75.109.163:43136.service - OpenSSH per-connection server daemon (147.75.109.163:43136). Feb 13 15:22:48.939532 sshd[5085]: Accepted publickey for core from 147.75.109.163 port 43136 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:48.942476 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:48.951501 systemd-logind[1921]: New session 28 of user core. Feb 13 15:22:48.961855 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:22:51.947599 containerd[1940]: time="2025-02-13T15:22:51.946513990Z" level=info msg="StopContainer for \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\" with timeout 30 (s)" Feb 13 15:22:51.951847 containerd[1940]: time="2025-02-13T15:22:51.950181634Z" level=info msg="Stop container \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\" with signal terminated" Feb 13 15:22:51.996878 systemd[1]: cri-containerd-ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707.scope: Deactivated successfully. Feb 13 15:22:52.028441 containerd[1940]: time="2025-02-13T15:22:52.028343647Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:22:52.068304 containerd[1940]: time="2025-02-13T15:22:52.067990171Z" level=info msg="StopContainer for \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\" with timeout 2 (s)" Feb 13 15:22:52.074725 containerd[1940]: time="2025-02-13T15:22:52.074527687Z" level=info msg="Stop container \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\" with signal terminated" Feb 13 15:22:52.076891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707-rootfs.mount: Deactivated successfully. Feb 13 15:22:52.086637 kubelet[3461]: E0213 15:22:52.086499 3461 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:22:52.090336 containerd[1940]: time="2025-02-13T15:22:52.090205015Z" level=info msg="shim disconnected" id=ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707 namespace=k8s.io Feb 13 15:22:52.090640 containerd[1940]: time="2025-02-13T15:22:52.090302647Z" level=warning msg="cleaning up after shim disconnected" id=ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707 namespace=k8s.io Feb 13 15:22:52.090819 containerd[1940]: time="2025-02-13T15:22:52.090743503Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:52.098859 systemd-networkd[1836]: lxc_health: Link DOWN Feb 13 15:22:52.098880 systemd-networkd[1836]: lxc_health: Lost carrier Feb 13 15:22:52.126045 systemd[1]: cri-containerd-91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf.scope: Deactivated successfully. Feb 13 15:22:52.128744 systemd[1]: cri-containerd-91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf.scope: Consumed 16.496s CPU time. 
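
The two StopContainer records above spell out the CRI stop contract: the runtime delivers the stop signal ("with signal terminated", i.e. SIGTERM) and, if the process outlives the grace period (30 s for what the later RemoveStaleState records identify as the cilium-operator container, 2 s for the cilium-agent), escalates to SIGKILL before the scope is deactivated. A toy sketch of that terminate-then-kill pattern using a plain child process — an illustration of the semantics, not containerd's actual code:

```python
# Send SIGTERM, wait out the grace period, escalate to SIGKILL.
import signal
import subprocess

def stop_with_grace(proc: subprocess.Popen, timeout: float) -> int:
    proc.send_signal(signal.SIGTERM)       # "Stop container ... with signal terminated"
    try:
        return proc.wait(timeout=timeout)  # exited within the grace period
    except subprocess.TimeoutExpired:
        proc.kill()                        # grace period lapsed: SIGKILL
        return proc.wait()

child = subprocess.Popen(["sleep", "300"])
print("exit status:", stop_with_grace(child, timeout=2.0))  # -9 on Linux
```
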
Feb 13 15:22:52.144253 containerd[1940]: time="2025-02-13T15:22:52.143992711Z" level=info msg="StopContainer for \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\" returns successfully" Feb 13 15:22:52.147616 containerd[1940]: time="2025-02-13T15:22:52.145064563Z" level=info msg="StopPodSandbox for \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\"" Feb 13 15:22:52.147616 containerd[1940]: time="2025-02-13T15:22:52.145140691Z" level=info msg="Container to stop \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:52.150488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5-shm.mount: Deactivated successfully. Feb 13 15:22:52.171986 systemd[1]: cri-containerd-353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5.scope: Deactivated successfully. Feb 13 15:22:52.192239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf-rootfs.mount: Deactivated successfully. Feb 13 15:22:52.199959 containerd[1940]: time="2025-02-13T15:22:52.199771915Z" level=info msg="shim disconnected" id=91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf namespace=k8s.io Feb 13 15:22:52.201059 containerd[1940]: time="2025-02-13T15:22:52.200647891Z" level=warning msg="cleaning up after shim disconnected" id=91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf namespace=k8s.io Feb 13 15:22:52.201059 containerd[1940]: time="2025-02-13T15:22:52.200691643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:52.246641 containerd[1940]: time="2025-02-13T15:22:52.245721236Z" level=info msg="StopContainer for \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\" returns successfully" Feb 13 15:22:52.247020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5-rootfs.mount: Deactivated successfully. 
Feb 13 15:22:52.248732 containerd[1940]: time="2025-02-13T15:22:52.248315996Z" level=info msg="shim disconnected" id=353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5 namespace=k8s.io Feb 13 15:22:52.248732 containerd[1940]: time="2025-02-13T15:22:52.248407808Z" level=warning msg="cleaning up after shim disconnected" id=353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5 namespace=k8s.io Feb 13 15:22:52.248732 containerd[1940]: time="2025-02-13T15:22:52.248429900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:52.251436 containerd[1940]: time="2025-02-13T15:22:52.251351240Z" level=info msg="StopPodSandbox for \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\"" Feb 13 15:22:52.251614 containerd[1940]: time="2025-02-13T15:22:52.251443304Z" level=info msg="Container to stop \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:52.251614 containerd[1940]: time="2025-02-13T15:22:52.251471120Z" level=info msg="Container to stop \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:52.251614 containerd[1940]: time="2025-02-13T15:22:52.251493332Z" level=info msg="Container to stop \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:52.251614 containerd[1940]: time="2025-02-13T15:22:52.251522816Z" level=info msg="Container to stop \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:52.251614 containerd[1940]: time="2025-02-13T15:22:52.251571788Z" level=info msg="Container to stop \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:22:52.266864 systemd[1]: cri-containerd-2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79.scope: Deactivated successfully. 
Feb 13 15:22:52.281073 containerd[1940]: time="2025-02-13T15:22:52.280943312Z" level=info msg="TearDown network for sandbox \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" successfully" Feb 13 15:22:52.281073 containerd[1940]: time="2025-02-13T15:22:52.280995056Z" level=info msg="StopPodSandbox for \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" returns successfully" Feb 13 15:22:52.297118 kubelet[3461]: I0213 15:22:52.296970 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5n4g\" (UniqueName: \"kubernetes.io/projected/c4b23637-8634-482c-9df5-2c243302b0a3-kube-api-access-b5n4g\") pod \"c4b23637-8634-482c-9df5-2c243302b0a3\" (UID: \"c4b23637-8634-482c-9df5-2c243302b0a3\") " Feb 13 15:22:52.297348 kubelet[3461]: I0213 15:22:52.297136 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b23637-8634-482c-9df5-2c243302b0a3-cilium-config-path\") pod \"c4b23637-8634-482c-9df5-2c243302b0a3\" (UID: \"c4b23637-8634-482c-9df5-2c243302b0a3\") " Feb 13 15:22:52.313187 kubelet[3461]: I0213 15:22:52.312787 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b23637-8634-482c-9df5-2c243302b0a3-kube-api-access-b5n4g" (OuterVolumeSpecName: "kube-api-access-b5n4g") pod "c4b23637-8634-482c-9df5-2c243302b0a3" (UID: "c4b23637-8634-482c-9df5-2c243302b0a3"). InnerVolumeSpecName "kube-api-access-b5n4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:22:52.313822 kubelet[3461]: I0213 15:22:52.313537 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b23637-8634-482c-9df5-2c243302b0a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4b23637-8634-482c-9df5-2c243302b0a3" (UID: "c4b23637-8634-482c-9df5-2c243302b0a3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:22:52.332665 containerd[1940]: time="2025-02-13T15:22:52.332301152Z" level=info msg="shim disconnected" id=2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79 namespace=k8s.io Feb 13 15:22:52.332665 containerd[1940]: time="2025-02-13T15:22:52.332388572Z" level=warning msg="cleaning up after shim disconnected" id=2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79 namespace=k8s.io Feb 13 15:22:52.332665 containerd[1940]: time="2025-02-13T15:22:52.332410292Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:52.356879 containerd[1940]: time="2025-02-13T15:22:52.356673548Z" level=info msg="TearDown network for sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" successfully" Feb 13 15:22:52.356879 containerd[1940]: time="2025-02-13T15:22:52.356739332Z" level=info msg="StopPodSandbox for \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" returns successfully" Feb 13 15:22:52.397923 kubelet[3461]: I0213 15:22:52.397871 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-xtables-lock\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.397923 kubelet[3461]: I0213 15:22:52.397933 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-bpf-maps\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398207 kubelet[3461]: I0213 15:22:52.397971 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-cgroup\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398207 kubelet[3461]: I0213 15:22:52.398026 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-run\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398207 kubelet[3461]: I0213 15:22:52.398061 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-net\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398207 kubelet[3461]: I0213 15:22:52.398093 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-lib-modules\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398207 kubelet[3461]: I0213 15:22:52.398135 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/715204ab-cf39-4ea0-b1e5-71a69f9b7212-clustermesh-secrets\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398207 kubelet[3461]: I0213 15:22:52.398168 3461 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hostproc\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398528 kubelet[3461]: I0213 15:22:52.398209 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-config-path\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398528 kubelet[3461]: I0213 15:22:52.398244 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cni-path\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398528 kubelet[3461]: I0213 15:22:52.398278 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-etc-cni-netd\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398528 kubelet[3461]: I0213 15:22:52.398316 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-kernel\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398528 kubelet[3461]: I0213 15:22:52.398354 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hubble-tls\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398528 kubelet[3461]: I0213 15:22:52.398391 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2twbb\" (UniqueName: \"kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-kube-api-access-2twbb\") pod \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\" (UID: \"715204ab-cf39-4ea0-b1e5-71a69f9b7212\") " Feb 13 15:22:52.398926 kubelet[3461]: I0213 15:22:52.398454 3461 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b5n4g\" (UniqueName: \"kubernetes.io/projected/c4b23637-8634-482c-9df5-2c243302b0a3-kube-api-access-b5n4g\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.398926 kubelet[3461]: I0213 15:22:52.398478 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4b23637-8634-482c-9df5-2c243302b0a3-cilium-config-path\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.398926 kubelet[3461]: I0213 15:22:52.398654 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.398926 kubelet[3461]: I0213 15:22:52.398753 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.398926 kubelet[3461]: I0213 15:22:52.398819 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.399496 kubelet[3461]: I0213 15:22:52.398867 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.399496 kubelet[3461]: I0213 15:22:52.398911 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.399496 kubelet[3461]: I0213 15:22:52.398954 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.399496 kubelet[3461]: I0213 15:22:52.399179 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cni-path" (OuterVolumeSpecName: "cni-path") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.410585 kubelet[3461]: I0213 15:22:52.406406 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hostproc" (OuterVolumeSpecName: "hostproc") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.410585 kubelet[3461]: I0213 15:22:52.406487 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.411856 kubelet[3461]: I0213 15:22:52.411783 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:22:52.414199 kubelet[3461]: I0213 15:22:52.414151 3461 scope.go:117] "RemoveContainer" containerID="ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707" Feb 13 15:22:52.417481 kubelet[3461]: I0213 15:22:52.417142 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-kube-api-access-2twbb" (OuterVolumeSpecName: "kube-api-access-2twbb") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "kube-api-access-2twbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:22:52.423032 kubelet[3461]: I0213 15:22:52.422964 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/715204ab-cf39-4ea0-b1e5-71a69f9b7212-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:22:52.427347 kubelet[3461]: I0213 15:22:52.427274 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:22:52.428109 containerd[1940]: time="2025-02-13T15:22:52.428048145Z" level=info msg="RemoveContainer for \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\"" Feb 13 15:22:52.431318 kubelet[3461]: I0213 15:22:52.431267 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "715204ab-cf39-4ea0-b1e5-71a69f9b7212" (UID: "715204ab-cf39-4ea0-b1e5-71a69f9b7212"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:22:52.439950 containerd[1940]: time="2025-02-13T15:22:52.439900125Z" level=info msg="RemoveContainer for \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\" returns successfully" Feb 13 15:22:52.440648 systemd[1]: Removed slice kubepods-besteffort-podc4b23637_8634_482c_9df5_2c243302b0a3.slice - libcontainer container kubepods-besteffort-podc4b23637_8634_482c_9df5_2c243302b0a3.slice. 
Feb 13 15:22:52.442052 kubelet[3461]: I0213 15:22:52.442011 3461 scope.go:117] "RemoveContainer" containerID="ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707" Feb 13 15:22:52.445452 containerd[1940]: time="2025-02-13T15:22:52.445352193Z" level=error msg="ContainerStatus for \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\": not found" Feb 13 15:22:52.445956 kubelet[3461]: E0213 15:22:52.445909 3461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\": not found" containerID="ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707" Feb 13 15:22:52.446294 kubelet[3461]: I0213 15:22:52.446116 3461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707"} err="failed to get container status \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccee0a3abbc6b6fcd7c4b8313676ead46d2a20d5120538eb98d45ffd2f7e6707\": not found" Feb 13 15:22:52.446622 kubelet[3461]: I0213 15:22:52.446422 3461 scope.go:117] "RemoveContainer" containerID="91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf" Feb 13 15:22:52.454102 containerd[1940]: time="2025-02-13T15:22:52.453345093Z" level=info msg="RemoveContainer for \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\"" Feb 13 15:22:52.462197 containerd[1940]: time="2025-02-13T15:22:52.461999997Z" level=info msg="RemoveContainer for \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\" returns successfully" Feb 13 15:22:52.462582 kubelet[3461]: I0213 15:22:52.462521 3461 scope.go:117] "RemoveContainer" containerID="45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af" Feb 13 15:22:52.467893 systemd[1]: Removed slice kubepods-burstable-pod715204ab_cf39_4ea0_b1e5_71a69f9b7212.slice - libcontainer container kubepods-burstable-pod715204ab_cf39_4ea0_b1e5_71a69f9b7212.slice. Feb 13 15:22:52.471811 containerd[1940]: time="2025-02-13T15:22:52.469682553Z" level=info msg="RemoveContainer for \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\"" Feb 13 15:22:52.471171 systemd[1]: kubepods-burstable-pod715204ab_cf39_4ea0_b1e5_71a69f9b7212.slice: Consumed 16.666s CPU time. 
Feb 13 15:22:52.475860 containerd[1940]: time="2025-02-13T15:22:52.475778589Z" level=info msg="RemoveContainer for \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\" returns successfully" Feb 13 15:22:52.477774 kubelet[3461]: I0213 15:22:52.476134 3461 scope.go:117] "RemoveContainer" containerID="3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b" Feb 13 15:22:52.484963 containerd[1940]: time="2025-02-13T15:22:52.484406697Z" level=info msg="RemoveContainer for \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\"" Feb 13 15:22:52.493642 containerd[1940]: time="2025-02-13T15:22:52.492022617Z" level=info msg="RemoveContainer for \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\" returns successfully" Feb 13 15:22:52.495359 kubelet[3461]: I0213 15:22:52.495284 3461 scope.go:117] "RemoveContainer" containerID="591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50" Feb 13 15:22:52.498912 kubelet[3461]: I0213 15:22:52.498847 3461 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-bpf-maps\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.498912 kubelet[3461]: I0213 15:22:52.498910 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-cgroup\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.498938 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-run\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.498960 3461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-net\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.498982 3461 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-lib-modules\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.499013 3461 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/715204ab-cf39-4ea0-b1e5-71a69f9b7212-clustermesh-secrets\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.499048 3461 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hostproc\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.499076 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cilium-config-path\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.499098 3461 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-cni-path\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499150 kubelet[3461]: I0213 15:22:52.499129 3461 reconciler_common.go:289] "Volume detached for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-etc-cni-netd\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499655 kubelet[3461]: I0213 15:22:52.499151 3461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-host-proc-sys-kernel\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499655 kubelet[3461]: I0213 15:22:52.499172 3461 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-hubble-tls\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499655 kubelet[3461]: I0213 15:22:52.499201 3461 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2twbb\" (UniqueName: \"kubernetes.io/projected/715204ab-cf39-4ea0-b1e5-71a69f9b7212-kube-api-access-2twbb\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.499655 kubelet[3461]: I0213 15:22:52.499232 3461 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/715204ab-cf39-4ea0-b1e5-71a69f9b7212-xtables-lock\") on node \"ip-172-31-28-163\" DevicePath \"\"" Feb 13 15:22:52.506349 containerd[1940]: time="2025-02-13T15:22:52.506232885Z" level=info msg="RemoveContainer for \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\"" Feb 13 15:22:52.526950 containerd[1940]: time="2025-02-13T15:22:52.526838037Z" level=info msg="RemoveContainer for \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\" returns successfully" Feb 13 15:22:52.527850 kubelet[3461]: I0213 15:22:52.527564 3461 scope.go:117] "RemoveContainer" containerID="cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff" Feb 13 15:22:52.531814 containerd[1940]: time="2025-02-13T15:22:52.531652773Z" level=info msg="RemoveContainer for \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\"" Feb 13 15:22:52.537502 containerd[1940]: time="2025-02-13T15:22:52.537122973Z" level=info msg="RemoveContainer for \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\" returns successfully" Feb 13 15:22:52.538145 kubelet[3461]: I0213 15:22:52.538052 3461 scope.go:117] "RemoveContainer" containerID="91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf" Feb 13 15:22:52.540093 containerd[1940]: time="2025-02-13T15:22:52.539849409Z" level=error msg="ContainerStatus for \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\": not found" Feb 13 15:22:52.540868 kubelet[3461]: E0213 15:22:52.540744 3461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\": not found" containerID="91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf" Feb 13 15:22:52.540868 kubelet[3461]: I0213 15:22:52.540801 3461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf"} err="failed to get container status \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"91d925fe09478f1a5c9758c2d65314908114232a25099a710975302d35a044bf\": not found" Feb 13 15:22:52.540868 kubelet[3461]: I0213 15:22:52.540842 3461 scope.go:117] "RemoveContainer" containerID="45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af" Feb 13 15:22:52.542329 containerd[1940]: time="2025-02-13T15:22:52.541896069Z" level=error msg="ContainerStatus for \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\": not found" Feb 13 15:22:52.543024 kubelet[3461]: E0213 15:22:52.542980 3461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\": not found" containerID="45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af" Feb 13 15:22:52.543523 kubelet[3461]: I0213 15:22:52.543278 3461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af"} err="failed to get container status \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\": rpc error: code = NotFound desc = an error occurred when try to find container \"45efbda54661a4deb1c8728e3bbdfadb8c4e8ffbfa53834d221ab3cd33f0a7af\": not found" Feb 13 15:22:52.543523 kubelet[3461]: I0213 15:22:52.543391 3461 scope.go:117] "RemoveContainer" containerID="3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b" Feb 13 15:22:52.545085 containerd[1940]: time="2025-02-13T15:22:52.544899009Z" level=error msg="ContainerStatus for \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\": not found" Feb 13 15:22:52.545768 kubelet[3461]: E0213 15:22:52.545681 3461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\": not found" containerID="3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b" Feb 13 15:22:52.545913 kubelet[3461]: I0213 15:22:52.545770 3461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b"} err="failed to get container status \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3425309e94071e2a60e7ad9074d3d17d80cc948f487611f54d1d0b79daa2a24b\": not found" Feb 13 15:22:52.545913 kubelet[3461]: I0213 15:22:52.545821 3461 scope.go:117] "RemoveContainer" containerID="591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50" Feb 13 15:22:52.547694 containerd[1940]: time="2025-02-13T15:22:52.546665145Z" level=error msg="ContainerStatus for \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\": not found" Feb 13 15:22:52.547694 containerd[1940]: time="2025-02-13T15:22:52.547573737Z" level=error 
msg="ContainerStatus for \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\": not found" Feb 13 15:22:52.548105 kubelet[3461]: E0213 15:22:52.547023 3461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\": not found" containerID="591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50" Feb 13 15:22:52.548105 kubelet[3461]: I0213 15:22:52.547070 3461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50"} err="failed to get container status \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\": rpc error: code = NotFound desc = an error occurred when try to find container \"591b351a090754d3ddcbc17d44c7cad5ca101426712bb94fc3207f750d0a1d50\": not found" Feb 13 15:22:52.548105 kubelet[3461]: I0213 15:22:52.547107 3461 scope.go:117] "RemoveContainer" containerID="cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff" Feb 13 15:22:52.548920 kubelet[3461]: E0213 15:22:52.548801 3461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\": not found" containerID="cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff" Feb 13 15:22:52.548920 kubelet[3461]: I0213 15:22:52.548864 3461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff"} err="failed to get container status \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc489fbfa0ee53ca9c944b84cc261dde070404cafa27679138e66e148fd415ff\": not found" Feb 13 15:22:52.841748 kubelet[3461]: I0213 15:22:52.841604 3461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" path="/var/lib/kubelet/pods/715204ab-cf39-4ea0-b1e5-71a69f9b7212/volumes" Feb 13 15:22:52.844498 kubelet[3461]: I0213 15:22:52.844431 3461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b23637-8634-482c-9df5-2c243302b0a3" path="/var/lib/kubelet/pods/c4b23637-8634-482c-9df5-2c243302b0a3/volumes" Feb 13 15:22:52.981834 systemd[1]: var-lib-kubelet-pods-c4b23637\x2d8634\x2d482c\x2d9df5\x2d2c243302b0a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db5n4g.mount: Deactivated successfully. Feb 13 15:22:52.982092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79-rootfs.mount: Deactivated successfully. Feb 13 15:22:52.982271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79-shm.mount: Deactivated successfully. Feb 13 15:22:52.982423 systemd[1]: var-lib-kubelet-pods-715204ab\x2dcf39\x2d4ea0\x2db1e5\x2d71a69f9b7212-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2twbb.mount: Deactivated successfully. 
Feb 13 15:22:52.982624 systemd[1]: var-lib-kubelet-pods-715204ab\x2dcf39\x2d4ea0\x2db1e5\x2d71a69f9b7212-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:22:52.982783 systemd[1]: var-lib-kubelet-pods-715204ab\x2dcf39\x2d4ea0\x2db1e5\x2d71a69f9b7212-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:22:53.866710 sshd[5087]: Connection closed by 147.75.109.163 port 43136 Feb 13 15:22:53.867803 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:53.875536 systemd[1]: sshd@27-172.31.28.163:22-147.75.109.163:43136.service: Deactivated successfully. Feb 13 15:22:53.880603 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:22:53.880937 systemd[1]: session-28.scope: Consumed 2.212s CPU time. Feb 13 15:22:53.882342 systemd-logind[1921]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:22:53.886362 systemd-logind[1921]: Removed session 28. Feb 13 15:22:53.905084 systemd[1]: Started sshd@28-172.31.28.163:22-147.75.109.163:34410.service - OpenSSH per-connection server daemon (147.75.109.163:34410). Feb 13 15:22:54.096460 sshd[5249]: Accepted publickey for core from 147.75.109.163 port 34410 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:54.099095 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:54.107259 systemd-logind[1921]: New session 29 of user core. Feb 13 15:22:54.116933 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 15:22:54.152468 ntpd[1914]: Deleting interface #12 lxc_health, fe80::58d8:b4ff:fe98:23f7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=77 secs Feb 13 15:22:54.153081 ntpd[1914]: 13 Feb 15:22:54 ntpd[1914]: Deleting interface #12 lxc_health, fe80::58d8:b4ff:fe98:23f7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=77 secs Feb 13 15:22:55.701083 sshd[5251]: Connection closed by 147.75.109.163 port 34410 Feb 13 15:22:55.703905 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:55.714347 systemd[1]: sshd@28-172.31.28.163:22-147.75.109.163:34410.service: Deactivated successfully. Feb 13 15:22:55.728903 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:22:55.731702 systemd[1]: session-29.scope: Consumed 1.394s CPU time. Feb 13 15:22:55.734060 systemd-logind[1921]: Session 29 logged out. Waiting for processes to exit. 
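
Two subsystems report the death of the Cilium health interface independently: systemd-networkd logged `lxc_health: Link DOWN` / `Lost carrier` when the agent was stopped, and ntpd deletes the interface from its listen list about two seconds later (its `active_time=77 secs` roughly matches the agent pod's final lifetime). Because this dump packs many records onto each physical line, a lookahead split on the timestamp recovers one record per line before filtering; a sketch under the same file-name assumption:

```python
# Re-split the flattened dump into individual records (lookahead on the
# "Mon DD HH:MM:SS.ffffff" stamp), then pull every lxc_health event.
import re

text = open("journal.log").read()  # hypothetical file name
records = re.split(r"(?=\w{3} {1,2}\d+ \d\d:\d\d:\d\d\.\d+ )", text)
for rec in records:
    if "lxc_health" in rec:
        print(rec.strip()[:110])
```
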
Feb 13 15:22:55.746694 kubelet[3461]: I0213 15:22:55.745816 3461 topology_manager.go:215] "Topology Admit Handler" podUID="2f6ee53b-674f-488b-973a-280260cc7419" podNamespace="kube-system" podName="cilium-wnkcd" Feb 13 15:22:55.746694 kubelet[3461]: E0213 15:22:55.745930 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" containerName="clean-cilium-state" Feb 13 15:22:55.746694 kubelet[3461]: E0213 15:22:55.745953 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" containerName="mount-cgroup" Feb 13 15:22:55.746694 kubelet[3461]: E0213 15:22:55.745967 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" containerName="apply-sysctl-overwrites" Feb 13 15:22:55.746694 kubelet[3461]: E0213 15:22:55.745992 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4b23637-8634-482c-9df5-2c243302b0a3" containerName="cilium-operator" Feb 13 15:22:55.746694 kubelet[3461]: E0213 15:22:55.746010 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" containerName="mount-bpf-fs" Feb 13 15:22:55.746694 kubelet[3461]: E0213 15:22:55.746029 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" containerName="cilium-agent" Feb 13 15:22:55.746694 kubelet[3461]: I0213 15:22:55.746072 3461 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b23637-8634-482c-9df5-2c243302b0a3" containerName="cilium-operator" Feb 13 15:22:55.746694 kubelet[3461]: I0213 15:22:55.746088 3461 memory_manager.go:354] "RemoveStaleState removing state" podUID="715204ab-cf39-4ea0-b1e5-71a69f9b7212" containerName="cilium-agent" Feb 13 15:22:55.761080 systemd[1]: Started sshd@29-172.31.28.163:22-147.75.109.163:34424.service - OpenSSH per-connection server daemon (147.75.109.163:34424). Feb 13 15:22:55.768014 systemd-logind[1921]: Removed session 29. 
Feb 13 15:22:55.778669 kubelet[3461]: W0213 15:22:55.777895 3461 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.778669 kubelet[3461]: E0213 15:22:55.777980 3461 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.779823 kubelet[3461]: W0213 15:22:55.779143 3461 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.779823 kubelet[3461]: E0213 15:22:55.779256 3461 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.779823 kubelet[3461]: W0213 15:22:55.779438 3461 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.779823 kubelet[3461]: E0213 15:22:55.779466 3461 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.779823 kubelet[3461]: W0213 15:22:55.779572 3461 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.780178 kubelet[3461]: E0213 15:22:55.779605 3461 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-28-163" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-163' and this object Feb 13 15:22:55.786372 systemd[1]: Created slice kubepods-burstable-pod2f6ee53b_674f_488b_973a_280260cc7419.slice - libcontainer container kubepods-burstable-pod2f6ee53b_674f_488b_973a_280260cc7419.slice. 
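
The reflector warnings above are a startup race rather than a persistent RBAC problem: the node authorizer only lets a kubelet read a secret or configmap once a pod scheduled to that node references it, and these watches fired before the new cilium-wnkcd pod's binding had propagated; the mount attempts further down are simply rescheduled with a 500 ms backoff. Summarising exactly which objects were denied, from the records above:

```python
# List the namespace/name/kind of each object the node was (transiently)
# forbidden to read, per the reflector warning records.
import re

text = open("journal.log").read()  # hypothetical file name
denied = set(re.findall(r'object-"([\w-]+)"/"([\w-]+)": failed to list \*v1\.(\w+)', text))
for ns, name, kind in sorted(denied):
    print(f"{kind} {ns}/{name}")
# ConfigMap kube-system/cilium-config, Secret kube-system/cilium-clustermesh, ...
```
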
Feb 13 15:22:55.820769 kubelet[3461]: I0213 15:22:55.820685 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-cilium-run\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.820769 kubelet[3461]: I0213 15:22:55.820769 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f6ee53b-674f-488b-973a-280260cc7419-clustermesh-secrets\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821010 kubelet[3461]: I0213 15:22:55.820816 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f6ee53b-674f-488b-973a-280260cc7419-cilium-config-path\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821010 kubelet[3461]: I0213 15:22:55.820859 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-bpf-maps\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821010 kubelet[3461]: I0213 15:22:55.820898 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-xtables-lock\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821010 kubelet[3461]: I0213 15:22:55.820935 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f6ee53b-674f-488b-973a-280260cc7419-cilium-ipsec-secrets\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821010 kubelet[3461]: I0213 15:22:55.820969 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f6ee53b-674f-488b-973a-280260cc7419-hubble-tls\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821010 kubelet[3461]: I0213 15:22:55.821004 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lsq\" (UniqueName: \"kubernetes.io/projected/2f6ee53b-674f-488b-973a-280260cc7419-kube-api-access-44lsq\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821364 kubelet[3461]: I0213 15:22:55.821043 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-hostproc\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.821364 kubelet[3461]: I0213 15:22:55.821189 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-lib-modules\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.822725 kubelet[3461]: I0213 15:22:55.821642 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-host-proc-sys-net\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.822725 kubelet[3461]: I0213 15:22:55.821758 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-cilium-cgroup\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.822725 kubelet[3461]: I0213 15:22:55.821831 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-etc-cni-netd\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.822725 kubelet[3461]: I0213 15:22:55.822329 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-host-proc-sys-kernel\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:55.822725 kubelet[3461]: I0213 15:22:55.822459 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f6ee53b-674f-488b-973a-280260cc7419-cni-path\") pod \"cilium-wnkcd\" (UID: \"2f6ee53b-674f-488b-973a-280260cc7419\") " pod="kube-system/cilium-wnkcd" Feb 13 15:22:56.003133 sshd[5261]: Accepted publickey for core from 147.75.109.163 port 34424 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:56.008932 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:56.026656 systemd-logind[1921]: New session 30 of user core. Feb 13 15:22:56.033903 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 15:22:56.157818 sshd[5264]: Connection closed by 147.75.109.163 port 34424 Feb 13 15:22:56.158703 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:56.165137 systemd[1]: sshd@29-172.31.28.163:22-147.75.109.163:34424.service: Deactivated successfully. Feb 13 15:22:56.169140 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 15:22:56.170372 systemd-logind[1921]: Session 30 logged out. Waiting for processes to exit. Feb 13 15:22:56.172802 systemd-logind[1921]: Removed session 30. Feb 13 15:22:56.196087 systemd[1]: Started sshd@30-172.31.28.163:22-147.75.109.163:34434.service - OpenSSH per-connection server daemon (147.75.109.163:34434). Feb 13 15:22:56.392005 sshd[5270]: Accepted publickey for core from 147.75.109.163 port 34434 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:56.395169 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:56.405988 systemd-logind[1921]: New session 31 of user core. 
Feb 13 15:22:56.413883 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 15:22:56.814693 containerd[1940]: time="2025-02-13T15:22:56.814528190Z" level=info msg="StopPodSandbox for \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\""
Feb 13 15:22:56.815271 containerd[1940]: time="2025-02-13T15:22:56.814692362Z" level=info msg="TearDown network for sandbox \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" successfully"
Feb 13 15:22:56.815271 containerd[1940]: time="2025-02-13T15:22:56.814718750Z" level=info msg="StopPodSandbox for \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" returns successfully"
Feb 13 15:22:56.816340 containerd[1940]: time="2025-02-13T15:22:56.816243506Z" level=info msg="RemovePodSandbox for \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\""
Feb 13 15:22:56.816340 containerd[1940]: time="2025-02-13T15:22:56.816298646Z" level=info msg="Forcibly stopping sandbox \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\""
Feb 13 15:22:56.816625 containerd[1940]: time="2025-02-13T15:22:56.816402482Z" level=info msg="TearDown network for sandbox \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" successfully"
Feb 13 15:22:56.827201 containerd[1940]: time="2025-02-13T15:22:56.826993106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:56.827364 containerd[1940]: time="2025-02-13T15:22:56.827282018Z" level=info msg="RemovePodSandbox \"353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5\" returns successfully"
Feb 13 15:22:56.830181 containerd[1940]: time="2025-02-13T15:22:56.829316954Z" level=info msg="StopPodSandbox for \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\""
Feb 13 15:22:56.830181 containerd[1940]: time="2025-02-13T15:22:56.829615934Z" level=info msg="TearDown network for sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" successfully"
Feb 13 15:22:56.830181 containerd[1940]: time="2025-02-13T15:22:56.829671146Z" level=info msg="StopPodSandbox for \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" returns successfully"
Feb 13 15:22:56.830613 containerd[1940]: time="2025-02-13T15:22:56.830511230Z" level=info msg="RemovePodSandbox for \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\""
Feb 13 15:22:56.830696 containerd[1940]: time="2025-02-13T15:22:56.830629262Z" level=info msg="Forcibly stopping sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\""
Feb 13 15:22:56.831139 containerd[1940]: time="2025-02-13T15:22:56.830824958Z" level=info msg="TearDown network for sandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" successfully"
Feb 13 15:22:56.839274 containerd[1940]: time="2025-02-13T15:22:56.839133194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
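The StopPodSandbox/RemovePodSandbox pairs above are the kubelet driving containerd over the CRI to garbage-collect old sandboxes. A minimal sketch of the same two calls, assuming containerd's default CRI socket path; the sandbox ID is copied from the log:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	id := "353d2a36b36e08459a22afa7e4a7e3b76c86908cf8ddbebc6249087aed9c36c5"

	// Stop tears down the sandbox's network namespace first, which is the
	// "TearDown network for sandbox ... successfully" entry in the log ...
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	// ... and Remove then deletes the sandbox record itself.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}
```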
Feb 13 15:22:56.839274 containerd[1940]: time="2025-02-13T15:22:56.839227178Z" level=info msg="RemovePodSandbox \"2c23adf0ce8146699543f8de911d4dcf7c489ce260f57e37b5e671b1d0944e79\" returns successfully"
Feb 13 15:22:56.925452 kubelet[3461]: E0213 15:22:56.924216 3461 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:22:56.925452 kubelet[3461]: E0213 15:22:56.924300 3461 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-wnkcd: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:22:56.925452 kubelet[3461]: E0213 15:22:56.924437 3461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f6ee53b-674f-488b-973a-280260cc7419-hubble-tls podName:2f6ee53b-674f-488b-973a-280260cc7419 nodeName:}" failed. No retries permitted until 2025-02-13 15:22:57.424393471 +0000 UTC m=+120.892290391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/2f6ee53b-674f-488b-973a-280260cc7419-hubble-tls") pod "cilium-wnkcd" (UID: "2f6ee53b-674f-488b-973a-280260cc7419") : failed to sync secret cache: timed out waiting for the condition
Feb 13 15:22:56.925452 kubelet[3461]: E0213 15:22:56.924933 3461 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:22:56.925452 kubelet[3461]: E0213 15:22:56.925015 3461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f6ee53b-674f-488b-973a-280260cc7419-clustermesh-secrets podName:2f6ee53b-674f-488b-973a-280260cc7419 nodeName:}" failed. No retries permitted until 2025-02-13 15:22:57.424995331 +0000 UTC m=+120.892892251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/2f6ee53b-674f-488b-973a-280260cc7419-clustermesh-secrets") pod "cilium-wnkcd" (UID: "2f6ee53b-674f-488b-973a-280260cc7419") : failed to sync secret cache: timed out waiting for the condition
Feb 13 15:22:56.925452 kubelet[3461]: E0213 15:22:56.925317 3461 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Feb 13 15:22:56.926650 kubelet[3461]: E0213 15:22:56.925395 3461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f6ee53b-674f-488b-973a-280260cc7419-cilium-ipsec-secrets podName:2f6ee53b-674f-488b-973a-280260cc7419 nodeName:}" failed. No retries permitted until 2025-02-13 15:22:57.425374087 +0000 UTC m=+120.893271007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/2f6ee53b-674f-488b-973a-280260cc7419-cilium-ipsec-secrets") pod "cilium-wnkcd" (UID: "2f6ee53b-674f-488b-973a-280260cc7419") : failed to sync secret cache: timed out waiting for the condition
Feb 13 15:22:57.087732 kubelet[3461]: E0213 15:22:57.087513 3461 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:22:57.600042 containerd[1940]: time="2025-02-13T15:22:57.599922182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnkcd,Uid:2f6ee53b-674f-488b-973a-280260cc7419,Namespace:kube-system,Attempt:0,}"
Feb 13 15:22:57.649869 containerd[1940]: time="2025-02-13T15:22:57.648531338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:22:57.650863 containerd[1940]: time="2025-02-13T15:22:57.649438479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:22:57.650863 containerd[1940]: time="2025-02-13T15:22:57.650592735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:22:57.651532 containerd[1940]: time="2025-02-13T15:22:57.650921127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:22:57.713897 systemd[1]: Started cri-containerd-68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8.scope - libcontainer container 68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8.
Feb 13 15:22:57.764212 containerd[1940]: time="2025-02-13T15:22:57.764148999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnkcd,Uid:2f6ee53b-674f-488b-973a-280260cc7419,Namespace:kube-system,Attempt:0,} returns sandbox id \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\""
Feb 13 15:22:57.772103 containerd[1940]: time="2025-02-13T15:22:57.772013703Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:22:57.798194 containerd[1940]: time="2025-02-13T15:22:57.798048747Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f\""
Feb 13 15:22:57.799626 containerd[1940]: time="2025-02-13T15:22:57.799383807Z" level=info msg="StartContainer for \"59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f\""
Feb 13 15:22:57.844848 systemd[1]: Started cri-containerd-59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f.scope - libcontainer container 59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f.
Feb 13 15:22:57.898343 containerd[1940]: time="2025-02-13T15:22:57.898138768Z" level=info msg="StartContainer for \"59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f\" returns successfully"
Feb 13 15:22:57.916033 systemd[1]: cri-containerd-59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f.scope: Deactivated successfully.
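The MountVolume.SetUp failures at 15:22:56.92x above are transient: the kubelet's secret informer cache had not yet synced, so each operation is parked and retried after the 500ms durationBeforeRetry printed in the log, succeeding before the sandbox starts at 15:22:57.6. A small sketch of that retry pattern using the apimachinery backoff helper; the function and variable names here are illustrative, not kubelet internals:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errCacheNotSynced = errors.New("failed to sync secret cache: timed out waiting for the condition")

// mountSecretVolume stands in for MountVolume.SetUp; it succeeds once the
// informer cache has the secret, simulated here by a readiness deadline.
func mountSecretVolume(ready time.Time) error {
	if time.Now().Before(ready) {
		return errCacheNotSynced
	}
	return nil
}

func main() {
	ready := time.Now().Add(1200 * time.Millisecond)

	// Start at the 500ms interval seen in the log and back off from there.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := mountSecretVolume(ready); err != nil {
			fmt.Println("retrying:", err)
			return false, nil // transient failure: try again after the next interval
		}
		return true, nil // mounted; stop retrying
	})
	fmt.Println("final result:", err)
}
```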
Feb 13 15:22:57.981013 containerd[1940]: time="2025-02-13T15:22:57.980891344Z" level=info msg="shim disconnected" id=59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f namespace=k8s.io
Feb 13 15:22:57.981271 containerd[1940]: time="2025-02-13T15:22:57.981010456Z" level=warning msg="cleaning up after shim disconnected" id=59e6762234406996e7f308757f18381f8b8de3b1c0849daf2899fb1a05bc1f1f namespace=k8s.io
Feb 13 15:22:57.981271 containerd[1940]: time="2025-02-13T15:22:57.981065452Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:22:58.479234 containerd[1940]: time="2025-02-13T15:22:58.479156763Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:22:58.516326 containerd[1940]: time="2025-02-13T15:22:58.515108787Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537\""
Feb 13 15:22:58.518326 containerd[1940]: time="2025-02-13T15:22:58.518244303Z" level=info msg="StartContainer for \"dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537\""
Feb 13 15:22:58.590926 systemd[1]: Started cri-containerd-dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537.scope - libcontainer container dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537.
Feb 13 15:22:58.648583 containerd[1940]: time="2025-02-13T15:22:58.647952291Z" level=info msg="StartContainer for \"dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537\" returns successfully"
Feb 13 15:22:58.660148 systemd[1]: cri-containerd-dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537.scope: Deactivated successfully.
Feb 13 15:22:58.723315 containerd[1940]: time="2025-02-13T15:22:58.723182920Z" level=info msg="shim disconnected" id=dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537 namespace=k8s.io
Feb 13 15:22:58.723619 containerd[1940]: time="2025-02-13T15:22:58.723344092Z" level=warning msg="cleaning up after shim disconnected" id=dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537 namespace=k8s.io
Feb 13 15:22:58.723619 containerd[1940]: time="2025-02-13T15:22:58.723379456Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:22:59.452025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfd5b3b519a2b28e7d7b8323c5b2059d830046064ca5e51e20178c5e18c73537-rootfs.mount: Deactivated successfully.
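Each "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" trio above is containerd reaping the runc shim after an init container (here mount-cgroup, then apply-sysctl-overwrites) runs to completion. A sketch of watching those task-exit events with the containerd Go client, assuming containerd's default socket, the k8s.io namespace, and this event-filter expression:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI pods live in the "k8s.io" namespace, as the log's namespace=k8s.io
	// fields show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Subscribe to task-exit events; each short-lived init container produces
	// one of these just before the shim-cleanup messages appear.
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			fmt.Println("task exit event:", env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```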
Feb 13 15:22:59.495176 containerd[1940]: time="2025-02-13T15:22:59.494372812Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:22:59.533199 containerd[1940]: time="2025-02-13T15:22:59.531661876Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482\""
Feb 13 15:22:59.534345 containerd[1940]: time="2025-02-13T15:22:59.534264208Z" level=info msg="StartContainer for \"6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482\""
Feb 13 15:22:59.544789 kubelet[3461]: I0213 15:22:59.540930 3461 setters.go:580] "Node became not ready" node="ip-172-31-28-163" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:22:59Z","lastTransitionTime":"2025-02-13T15:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:22:59.638923 systemd[1]: Started cri-containerd-6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482.scope - libcontainer container 6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482.
Feb 13 15:22:59.708706 containerd[1940]: time="2025-02-13T15:22:59.707104613Z" level=info msg="StartContainer for \"6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482\" returns successfully"
Feb 13 15:22:59.718903 systemd[1]: cri-containerd-6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482.scope: Deactivated successfully.
Feb 13 15:22:59.770600 containerd[1940]: time="2025-02-13T15:22:59.770493797Z" level=info msg="shim disconnected" id=6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482 namespace=k8s.io
Feb 13 15:22:59.771100 containerd[1940]: time="2025-02-13T15:22:59.770925089Z" level=warning msg="cleaning up after shim disconnected" id=6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482 namespace=k8s.io
Feb 13 15:22:59.771100 containerd[1940]: time="2025-02-13T15:22:59.770955629Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:00.450859 systemd[1]: run-containerd-runc-k8s.io-6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482-runc.0UruXP.mount: Deactivated successfully.
Feb 13 15:23:00.451286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6888e54a16bbd6a5e5421e04c3307187422cb6b92798b636ebab5e623e0e7482-rootfs.mount: Deactivated successfully.
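The mount-bpf-fs init step started above exists to mount the BPF filesystem so Cilium's pinned maps (the bpf-maps host-path volume earlier in the log) survive agent restarts. Its core operation, reduced to a minimal sketch with the raw mount(2) wrapper; a real init container would also check whether bpffs is already mounted before attempting this:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf
	// Requires CAP_SYS_ADMIN, which is why this runs as a privileged
	// init container rather than inside the agent itself.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```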
Feb 13 15:23:00.500644 containerd[1940]: time="2025-02-13T15:23:00.499629533Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:23:00.547542 containerd[1940]: time="2025-02-13T15:23:00.547272653Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910\""
Feb 13 15:23:00.548280 containerd[1940]: time="2025-02-13T15:23:00.548198561Z" level=info msg="StartContainer for \"77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910\""
Feb 13 15:23:00.616871 systemd[1]: Started cri-containerd-77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910.scope - libcontainer container 77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910.
Feb 13 15:23:00.665609 systemd[1]: cri-containerd-77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910.scope: Deactivated successfully.
Feb 13 15:23:00.675455 containerd[1940]: time="2025-02-13T15:23:00.675385386Z" level=info msg="StartContainer for \"77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910\" returns successfully"
Feb 13 15:23:00.725682 containerd[1940]: time="2025-02-13T15:23:00.725361258Z" level=info msg="shim disconnected" id=77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910 namespace=k8s.io
Feb 13 15:23:00.725682 containerd[1940]: time="2025-02-13T15:23:00.725458494Z" level=warning msg="cleaning up after shim disconnected" id=77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910 namespace=k8s.io
Feb 13 15:23:00.725682 containerd[1940]: time="2025-02-13T15:23:00.725480202Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:23:01.451240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77bab741425f79928fdcb294e6256679732a9038a2da7da77f2330e0bd554910-rootfs.mount: Deactivated successfully.
Feb 13 15:23:01.510311 containerd[1940]: time="2025-02-13T15:23:01.509766438Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:23:01.548165 containerd[1940]: time="2025-02-13T15:23:01.548000334Z" level=info msg="CreateContainer within sandbox \"68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a4823d0a3d13ba101f3f7f7eb606b4013a9b76ac8821b2f39884673cf102bed\""
Feb 13 15:23:01.549010 containerd[1940]: time="2025-02-13T15:23:01.548877546Z" level=info msg="StartContainer for \"9a4823d0a3d13ba101f3f7f7eb606b4013a9b76ac8821b2f39884673cf102bed\""
Feb 13 15:23:01.618912 systemd[1]: Started cri-containerd-9a4823d0a3d13ba101f3f7f7eb606b4013a9b76ac8821b2f39884673cf102bed.scope - libcontainer container 9a4823d0a3d13ba101f3f7f7eb606b4013a9b76ac8821b2f39884673cf102bed.
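With clean-cilium-state done, the long-running cilium-agent container is created and started in the same sandbox; unlike the init containers, its scope stays active. A sketch of the CRI CreateContainer/StartContainer pair behind those entries; the sandbox ID and container name come from the log, while the image reference and the omitted PodSandboxConfig are assumptions (the log never prints them):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxID := "68dc66d49154f62c918672b90b69650f3405bdfc1c1d7aa920ea31acd592d7f8"

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "cilium-agent"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // assumed image reference
		},
		// A real caller also passes SandboxConfig: the same PodSandboxConfig
		// that was used for RunPodSandbox.
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```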
Feb 13 15:23:01.683856 containerd[1940]: time="2025-02-13T15:23:01.683786947Z" level=info msg="StartContainer for \"9a4823d0a3d13ba101f3f7f7eb606b4013a9b76ac8821b2f39884673cf102bed\" returns successfully"
Feb 13 15:23:02.543717 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:23:05.226682 systemd[1]: run-containerd-runc-k8s.io-9a4823d0a3d13ba101f3f7f7eb606b4013a9b76ac8821b2f39884673cf102bed-runc.vkNrTR.mount: Deactivated successfully.
Feb 13 15:23:05.368163 kubelet[3461]: E0213 15:23:05.367976 3461 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58266->127.0.0.1:41241: write tcp 127.0.0.1:58266->127.0.0.1:41241: write: connection reset by peer
Feb 13 15:23:07.133732 (udev-worker)[6109]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:23:07.137159 (udev-worker)[6112]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:23:07.177444 systemd-networkd[1836]: lxc_health: Link UP
Feb 13 15:23:07.207154 systemd-networkd[1836]: lxc_health: Gained carrier
Feb 13 15:23:07.595992 systemd[1]: run-containerd-runc-k8s.io-9a4823d0a3d13ba101f3f7f7eb606b4013a9b76ac8821b2f39884673cf102bed-runc.gguZmj.mount: Deactivated successfully.
Feb 13 15:23:07.693221 kubelet[3461]: I0213 15:23:07.693114 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wnkcd" podStartSLOduration=12.693093708 podStartE2EDuration="12.693093708s" podCreationTimestamp="2025-02-13 15:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:02.566283871 +0000 UTC m=+126.034180827" watchObservedRunningTime="2025-02-13 15:23:07.693093708 +0000 UTC m=+131.160990652"
Feb 13 15:23:07.872365 kubelet[3461]: E0213 15:23:07.871642 3461 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:58282->127.0.0.1:41241: read tcp 127.0.0.1:58282->127.0.0.1:41241: read: connection reset by peer
Feb 13 15:23:07.873738 kubelet[3461]: E0213 15:23:07.873683 3461 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58282->127.0.0.1:41241: write tcp 127.0.0.1:58282->127.0.0.1:41241: write: broken pipe
Feb 13 15:23:09.124417 systemd-networkd[1836]: lxc_health: Gained IPv6LL
Feb 13 15:23:11.152516 ntpd[1914]: Listen normally on 15 lxc_health [fe80::cf:2bff:fe45:a8d0%14]:123
Feb 13 15:23:11.153236 ntpd[1914]: 13 Feb 15:23:11 ntpd[1914]: Listen normally on 15 lxc_health [fe80::cf:2bff:fe45:a8d0%14]:123
Feb 13 15:23:12.536221 sshd[5272]: Connection closed by 147.75.109.163 port 34434
Feb 13 15:23:12.537247 sshd-session[5270]: pam_unix(sshd:session): session closed for user core
Feb 13 15:23:12.544669 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 15:23:12.548978 systemd[1]: sshd@30-172.31.28.163:22-147.75.109.163:34434.service: Deactivated successfully.
Feb 13 15:23:12.563701 systemd-logind[1921]: Session 31 logged out. Waiting for processes to exit.
Feb 13 15:23:12.567159 systemd-logind[1921]: Removed session 31.
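The lxc_health interface that systemd-networkd brings up above is the veth Cilium creates for its own connectivity health checks; the "Link UP" / "Gained carrier" / "Gained IPv6LL" sequence shows it becoming usable, after which ntpd starts listening on its link-local address. A stdlib-only sketch that performs the same two checks from userspace, assuming it runs on the node itself:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func main() {
	iface, err := net.InterfaceByName("lxc_health")
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors "Link UP" / "Gained carrier": the interface flags report up.
	fmt.Println("up:", iface.Flags&net.FlagUp != 0)

	// Mirrors "Gained IPv6LL": look for the fe80::/10 link-local address,
	// e.g. fe80::cf:2bff:fe45:a8d0 in the ntpd entries above.
	addrs, err := iface.Addrs()
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		if strings.HasPrefix(a.String(), "fe80::") {
			fmt.Println("IPv6 link-local:", a.String())
		}
	}
}
```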