Feb 13 15:16:45.188592 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 15:16:45.188636 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025 Feb 13 15:16:45.188659 kernel: KASLR disabled due to lack of seed Feb 13 15:16:45.188676 kernel: efi: EFI v2.7 by EDK II Feb 13 15:16:45.188691 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Feb 13 15:16:45.188753 kernel: secureboot: Secure boot disabled Feb 13 15:16:45.188773 kernel: ACPI: Early table checksum verification disabled Feb 13 15:16:45.188789 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 15:16:45.188806 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 15:16:45.188821 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 15:16:45.188844 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 15:16:45.188943 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 15:16:45.189181 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 15:16:45.189201 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 15:16:45.189219 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 15:16:45.189241 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 15:16:45.189258 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 15:16:45.189274 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 15:16:45.189290 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 15:16:45.189306 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 15:16:45.189323 kernel: printk: bootconsole [uart0] enabled Feb 13 15:16:45.189339 kernel: NUMA: Failed to initialise from firmware Feb 13 15:16:45.189355 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 15:16:45.189371 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 15:16:45.189387 kernel: Zone ranges: Feb 13 15:16:45.189403 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 15:16:45.189423 kernel: DMA32 empty Feb 13 15:16:45.189440 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 15:16:45.189456 kernel: Movable zone start for each node Feb 13 15:16:45.189472 kernel: Early memory node ranges Feb 13 15:16:45.189488 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 15:16:45.189504 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 15:16:45.189520 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 15:16:45.189537 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 15:16:45.189553 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 15:16:45.189569 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 15:16:45.189585 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 15:16:45.189601 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 15:16:45.189621 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Feb 13 15:16:45.189638 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 15:16:45.189661 kernel: psci: probing for conduit method from ACPI. Feb 13 15:16:45.189678 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 15:16:45.189805 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:16:45.189834 kernel: psci: Trusted OS migration not required Feb 13 15:16:45.189852 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:16:45.189869 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:16:45.189887 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:16:45.189904 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 15:16:45.189921 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:16:45.189938 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:16:45.189956 kernel: CPU features: detected: Spectre-v2 Feb 13 15:16:45.189973 kernel: CPU features: detected: Spectre-v3a Feb 13 15:16:45.189989 kernel: CPU features: detected: Spectre-BHB Feb 13 15:16:45.190007 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 15:16:45.190024 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 15:16:45.190045 kernel: alternatives: applying boot alternatives Feb 13 15:16:45.190064 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:16:45.190083 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:16:45.190100 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:16:45.190118 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:16:45.190135 kernel: Fallback order for Node 0: 0 Feb 13 15:16:45.190152 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 15:16:45.190169 kernel: Policy zone: Normal Feb 13 15:16:45.190186 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:16:45.190203 kernel: software IO TLB: area num 2. Feb 13 15:16:45.190224 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 15:16:45.190242 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved) Feb 13 15:16:45.190260 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:16:45.190277 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:16:45.190295 kernel: rcu: RCU event tracing is enabled. Feb 13 15:16:45.190312 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:16:45.190330 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:16:45.190347 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:16:45.190365 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:16:45.190382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:16:45.190399 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:16:45.190420 kernel: GICv3: 96 SPIs implemented Feb 13 15:16:45.190437 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:16:45.190454 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:16:45.190471 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 15:16:45.190488 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 15:16:45.190504 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 15:16:45.190522 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:16:45.190539 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:16:45.190556 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 15:16:45.190573 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 15:16:45.190590 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 15:16:45.190608 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:16:45.190629 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 15:16:45.190646 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 15:16:45.190663 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 15:16:45.190680 kernel: Console: colour dummy device 80x25 Feb 13 15:16:45.190744 kernel: printk: console [tty1] enabled Feb 13 15:16:45.190804 kernel: ACPI: Core revision 20230628 Feb 13 15:16:45.190827 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 15:16:45.190845 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:16:45.190863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:16:45.190880 kernel: landlock: Up and running. Feb 13 15:16:45.190904 kernel: SELinux: Initializing. Feb 13 15:16:45.190921 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:16:45.190939 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:16:45.190957 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:16:45.190974 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:16:45.190992 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:16:45.191010 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:16:45.191027 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 15:16:45.191048 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 15:16:45.191066 kernel: Remapping and enabling EFI services. Feb 13 15:16:45.191083 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:16:45.191100 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:16:45.191118 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 15:16:45.191136 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 15:16:45.191153 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 15:16:45.191172 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:16:45.191189 kernel: SMP: Total of 2 processors activated. 
Feb 13 15:16:45.191206 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:16:45.191228 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 15:16:45.191245 kernel: CPU features: detected: CRC32 instructions Feb 13 15:16:45.191274 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:16:45.191296 kernel: alternatives: applying system-wide alternatives Feb 13 15:16:45.191315 kernel: devtmpfs: initialized Feb 13 15:16:45.191334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:16:45.191352 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:16:45.191370 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:16:45.191388 kernel: SMBIOS 3.0.0 present. Feb 13 15:16:45.191410 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 15:16:45.191429 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:16:45.191448 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:16:45.191466 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:16:45.191485 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:16:45.191503 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:16:45.191521 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1 Feb 13 15:16:45.191543 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:16:45.191561 kernel: cpuidle: using governor menu Feb 13 15:16:45.191579 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:16:45.191597 kernel: ASID allocator initialised with 65536 entries Feb 13 15:16:45.191615 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:16:45.191633 kernel: Serial: AMBA PL011 UART driver Feb 13 15:16:45.191669 kernel: Modules: 17440 pages in range for non-PLT usage Feb 13 15:16:45.191688 kernel: Modules: 508960 pages in range for PLT usage Feb 13 15:16:45.191744 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:16:45.191771 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:16:45.191790 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:16:45.191808 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:16:45.191826 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:16:45.191844 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:16:45.191862 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:16:45.191880 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:16:45.191898 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:16:45.191916 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:16:45.191938 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:16:45.191956 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:16:45.191975 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:16:45.191992 kernel: ACPI: Interpreter enabled Feb 13 15:16:45.192014 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:16:45.192031 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:16:45.192050 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 15:16:45.192351 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:16:45.192563 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:16:45.192799 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:16:45.193003 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 15:16:45.193201 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 15:16:45.193227 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 15:16:45.193246 kernel: acpiphp: Slot [1] registered Feb 13 15:16:45.193264 kernel: acpiphp: Slot [2] registered Feb 13 15:16:45.193282 kernel: acpiphp: Slot [3] registered Feb 13 15:16:45.193308 kernel: acpiphp: Slot [4] registered Feb 13 15:16:45.193326 kernel: acpiphp: Slot [5] registered Feb 13 15:16:45.193344 kernel: acpiphp: Slot [6] registered Feb 13 15:16:45.193362 kernel: acpiphp: Slot [7] registered Feb 13 15:16:45.193380 kernel: acpiphp: Slot [8] registered Feb 13 15:16:45.193398 kernel: acpiphp: Slot [9] registered Feb 13 15:16:45.193417 kernel: acpiphp: Slot [10] registered Feb 13 15:16:45.193435 kernel: acpiphp: Slot [11] registered Feb 13 15:16:45.193453 kernel: acpiphp: Slot [12] registered Feb 13 15:16:45.193471 kernel: acpiphp: Slot [13] registered Feb 13 15:16:45.193493 kernel: acpiphp: Slot [14] registered Feb 13 15:16:45.193511 kernel: acpiphp: Slot [15] registered Feb 13 15:16:45.193530 kernel: acpiphp: Slot [16] registered Feb 13 15:16:45.193548 kernel: acpiphp: Slot [17] registered Feb 13 15:16:45.193566 kernel: acpiphp: Slot [18] registered Feb 13 15:16:45.193584 kernel: acpiphp: Slot [19] registered Feb 13 15:16:45.193602 kernel: acpiphp: Slot [20] registered Feb 13 15:16:45.193621 kernel: acpiphp: Slot [21] registered Feb 13 15:16:45.193638 kernel: acpiphp: Slot [22] registered Feb 13 15:16:45.193660 kernel: acpiphp: Slot [23] registered Feb 13 15:16:45.193678 kernel: acpiphp: Slot [24] registered Feb 13 15:16:45.193715 kernel: acpiphp: Slot [25] registered Feb 13 15:16:45.193738 kernel: acpiphp: Slot [26] registered Feb 13 15:16:45.193757 kernel: acpiphp: Slot [27] registered Feb 13 15:16:45.193775 kernel: acpiphp: Slot [28] registered Feb 13 15:16:45.193794 kernel: acpiphp: Slot [29] registered Feb 13 15:16:45.193812 kernel: acpiphp: Slot [30] registered Feb 13 15:16:45.193830 kernel: acpiphp: Slot [31] registered Feb 13 15:16:45.193856 kernel: PCI host bridge to bus 0000:00 Feb 13 15:16:45.194086 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 15:16:45.194273 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:16:45.194457 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 15:16:45.194638 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 15:16:45.194922 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 15:16:45.195147 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 15:16:45.195368 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 15:16:45.195595 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 15:16:45.195873 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 15:16:45.196087 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 15:16:45.196316 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 15:16:45.196523 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 15:16:45.196819 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 15:16:45.197045 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 15:16:45.197245 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 15:16:45.197443 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 15:16:45.197641 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 15:16:45.197888 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 15:16:45.198089 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 15:16:45.198291 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 15:16:45.198484 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 15:16:45.198666 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:16:45.198895 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 15:16:45.198921 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:16:45.198940 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:16:45.198959 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:16:45.198977 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:16:45.198996 kernel: iommu: Default domain type: Translated Feb 13 15:16:45.199020 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:16:45.199039 kernel: efivars: Registered efivars operations Feb 13 15:16:45.199057 kernel: vgaarb: loaded Feb 13 15:16:45.199075 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:16:45.199093 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:16:45.199112 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:16:45.199130 kernel: pnp: PnP ACPI init Feb 13 15:16:45.199337 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 15:16:45.199369 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:16:45.199388 kernel: NET: Registered PF_INET protocol family Feb 13 15:16:45.199407 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:16:45.199425 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:16:45.199444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:16:45.199463 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:16:45.199481 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:16:45.199499 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:16:45.199517 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:16:45.199540 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:16:45.199558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:16:45.199577 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:16:45.199595 kernel: kvm [1]: HYP mode not available Feb 13 15:16:45.199613 kernel: Initialise system trusted keyrings Feb 13 15:16:45.199631 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:16:45.199667 kernel: Key type asymmetric registered Feb 13 15:16:45.199688 kernel: Asymmetric key parser 'x509' registered Feb 13 15:16:45.199763 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:16:45.199790 kernel: io scheduler mq-deadline registered Feb 13 
15:16:45.199809 kernel: io scheduler kyber registered Feb 13 15:16:45.199828 kernel: io scheduler bfq registered Feb 13 15:16:45.200224 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 15:16:45.200252 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:16:45.200271 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:16:45.200290 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 15:16:45.200308 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 15:16:45.200332 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:16:45.200352 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 15:16:45.200549 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 15:16:45.200574 kernel: printk: console [ttyS0] disabled Feb 13 15:16:45.200593 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 15:16:45.200612 kernel: printk: console [ttyS0] enabled Feb 13 15:16:45.200631 kernel: printk: bootconsole [uart0] disabled Feb 13 15:16:45.200649 kernel: thunder_xcv, ver 1.0 Feb 13 15:16:45.200667 kernel: thunder_bgx, ver 1.0 Feb 13 15:16:45.200685 kernel: nicpf, ver 1.0 Feb 13 15:16:45.201252 kernel: nicvf, ver 1.0 Feb 13 15:16:45.201476 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:16:45.201665 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:16:44 UTC (1739459804) Feb 13 15:16:45.201690 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:16:45.201836 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 15:16:45.201861 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:16:45.201880 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:16:45.201906 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:16:45.201925 kernel: Segment Routing with IPv6 Feb 13 15:16:45.201943 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:16:45.201962 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:16:45.201980 kernel: Key type dns_resolver registered Feb 13 15:16:45.201998 kernel: registered taskstats version 1 Feb 13 15:16:45.202017 kernel: Loading compiled-in X.509 certificates Feb 13 15:16:45.202039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51' Feb 13 15:16:45.202057 kernel: Key type .fscrypt registered Feb 13 15:16:45.202076 kernel: Key type fscrypt-provisioning registered Feb 13 15:16:45.202099 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:16:45.202118 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:16:45.202137 kernel: ima: No architecture policies found Feb 13 15:16:45.202155 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:16:45.202174 kernel: clk: Disabling unused clocks Feb 13 15:16:45.202192 kernel: Freeing unused kernel memory: 39680K Feb 13 15:16:45.202210 kernel: Run /init as init process Feb 13 15:16:45.202229 kernel: with arguments: Feb 13 15:16:45.202247 kernel: /init Feb 13 15:16:45.202269 kernel: with environment: Feb 13 15:16:45.202287 kernel: HOME=/ Feb 13 15:16:45.202305 kernel: TERM=linux Feb 13 15:16:45.202323 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:16:45.202345 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:16:45.202368 systemd[1]: Detected virtualization amazon. Feb 13 15:16:45.202388 systemd[1]: Detected architecture arm64. Feb 13 15:16:45.202412 systemd[1]: Running in initrd. Feb 13 15:16:45.202432 systemd[1]: No hostname configured, using default hostname. Feb 13 15:16:45.202451 systemd[1]: Hostname set to . Feb 13 15:16:45.202472 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:16:45.202491 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:16:45.202511 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:45.202532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:45.202553 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:16:45.202578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:16:45.202598 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:16:45.202619 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:16:45.202642 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:16:45.202662 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:16:45.202682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:45.202745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:45.202779 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:16:45.202800 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:16:45.202820 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:16:45.202840 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:16:45.202860 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:16:45.202881 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:16:45.202901 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:16:45.202921 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:16:45.202941 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 15:16:45.202966 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:45.202986 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:45.203006 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:16:45.203026 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:16:45.203046 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:16:45.203066 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:16:45.203086 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:16:45.203106 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:16:45.203130 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:16:45.203150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:45.203170 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:16:45.203190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:45.203210 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:16:45.203232 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:16:45.203296 systemd-journald[252]: Collecting audit messages is disabled. Feb 13 15:16:45.203339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:45.203359 systemd-journald[252]: Journal started Feb 13 15:16:45.203402 systemd-journald[252]: Runtime Journal (/run/log/journal/ec20e534ef3f1e3bcbe24134461d6fec) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:16:45.177876 systemd-modules-load[253]: Inserted module 'overlay' Feb 13 15:16:45.215763 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:45.220754 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:16:45.220823 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:16:45.224651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:45.232742 kernel: Bridge firewalling registered Feb 13 15:16:45.232203 systemd-modules-load[253]: Inserted module 'br_netfilter' Feb 13 15:16:45.236499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:45.243063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:16:45.254037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:16:45.261026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:16:45.285185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:45.295863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:45.315390 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:16:45.317851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:16:45.327839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:45.339987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:16:45.356848 dracut-cmdline[286]: dracut-dracut-053 Feb 13 15:16:45.363185 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:16:45.418591 systemd-resolved[289]: Positive Trust Anchors: Feb 13 15:16:45.421593 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:16:45.421670 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:16:45.505740 kernel: SCSI subsystem initialized Feb 13 15:16:45.512730 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:16:45.525928 kernel: iscsi: registered transport (tcp) Feb 13 15:16:45.547736 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:16:45.547822 kernel: QLogic iSCSI HBA Driver Feb 13 15:16:45.652740 kernel: random: crng init done Feb 13 15:16:45.652989 systemd-resolved[289]: Defaulting to hostname 'linux'. Feb 13 15:16:45.656437 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:16:45.660528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:45.683451 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:16:45.693054 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:16:45.732472 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:16:45.732546 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:16:45.733731 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:16:45.799761 kernel: raid6: neonx8 gen() 6667 MB/s Feb 13 15:16:45.816731 kernel: raid6: neonx4 gen() 6509 MB/s Feb 13 15:16:45.833731 kernel: raid6: neonx2 gen() 5437 MB/s Feb 13 15:16:45.850731 kernel: raid6: neonx1 gen() 3944 MB/s Feb 13 15:16:45.867732 kernel: raid6: int64x8 gen() 3786 MB/s Feb 13 15:16:45.884731 kernel: raid6: int64x4 gen() 3703 MB/s Feb 13 15:16:45.901731 kernel: raid6: int64x2 gen() 3587 MB/s Feb 13 15:16:45.919499 kernel: raid6: int64x1 gen() 2767 MB/s Feb 13 15:16:45.919539 kernel: raid6: using algorithm neonx8 gen() 6667 MB/s Feb 13 15:16:45.937507 kernel: raid6: .... 
xor() 4841 MB/s, rmw enabled Feb 13 15:16:45.937547 kernel: raid6: using neon recovery algorithm Feb 13 15:16:45.945867 kernel: xor: measuring software checksum speed Feb 13 15:16:45.945916 kernel: 8regs : 10974 MB/sec Feb 13 15:16:45.946952 kernel: 32regs : 11941 MB/sec Feb 13 15:16:45.948127 kernel: arm64_neon : 9513 MB/sec Feb 13 15:16:45.948159 kernel: xor: using function: 32regs (11941 MB/sec) Feb 13 15:16:46.032749 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:16:46.051555 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:16:46.062004 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:46.100379 systemd-udevd[471]: Using default interface naming scheme 'v255'. Feb 13 15:16:46.108657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:46.122989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:16:46.153668 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Feb 13 15:16:46.208996 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:16:46.218055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:16:46.330980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:46.357397 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:16:46.404460 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:16:46.421444 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:16:46.440165 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:46.444400 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:16:46.468866 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:16:46.516335 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:16:46.538227 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:16:46.538339 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 15:16:46.563960 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 15:16:46.564213 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 15:16:46.564443 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:30:9d:e2:28:e1 Feb 13 15:16:46.548166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:16:46.548417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:46.589985 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:16:46.590039 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 15:16:46.551039 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:46.553166 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:16:46.553427 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:46.555688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:46.565669 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:16:46.618433 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 15:16:46.618767 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Feb 13 15:16:46.618816 kernel: GPT:9289727 != 16777215 Feb 13 15:16:46.618843 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:16:46.618869 kernel: GPT:9289727 != 16777215 Feb 13 15:16:46.618899 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:16:46.618924 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:16:46.567852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:46.639176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:46.652071 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:46.692835 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:46.703066 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (518) Feb 13 15:16:46.742834 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (525) Feb 13 15:16:46.809341 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 15:16:46.841455 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 15:16:46.858462 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:16:46.873328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 15:16:46.876017 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 15:16:46.903061 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:16:46.912379 disk-uuid[662]: Primary Header is updated. Feb 13 15:16:46.912379 disk-uuid[662]: Secondary Entries is updated. Feb 13 15:16:46.912379 disk-uuid[662]: Secondary Header is updated. Feb 13 15:16:46.920755 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:16:47.936489 disk-uuid[663]: The operation has completed successfully. Feb 13 15:16:47.940857 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:16:48.115033 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:16:48.116868 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:16:48.164010 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:16:48.173599 sh[924]: Success Feb 13 15:16:48.197735 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:16:48.303678 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:16:48.320917 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:16:48.324450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:16:48.354208 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:16:48.354270 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:48.354296 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:16:48.355926 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:16:48.357115 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:16:48.453736 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:16:48.486434 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:16:48.486950 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:16:48.500088 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:16:48.505414 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:16:48.532794 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:48.532866 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:48.534274 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:16:48.541744 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:16:48.558885 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:16:48.561221 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:48.572218 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:16:48.591313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:16:48.692765 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:16:48.707076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:16:48.747794 systemd-networkd[1116]: lo: Link UP Feb 13 15:16:48.747815 systemd-networkd[1116]: lo: Gained carrier Feb 13 15:16:48.750710 systemd-networkd[1116]: Enumeration completed Feb 13 15:16:48.751283 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:16:48.751484 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:48.751490 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:48.755530 systemd[1]: Reached target network.target - Network. Feb 13 15:16:48.759837 systemd-networkd[1116]: eth0: Link UP Feb 13 15:16:48.759845 systemd-networkd[1116]: eth0: Gained carrier Feb 13 15:16:48.759863 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:16:48.790768 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.28.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:16:49.050187 ignition[1027]: Ignition 2.20.0 Feb 13 15:16:49.050216 ignition[1027]: Stage: fetch-offline Feb 13 15:16:49.050638 ignition[1027]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:49.050663 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:49.053318 ignition[1027]: Ignition finished successfully Feb 13 15:16:49.060329 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:16:49.081127 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:16:49.102225 ignition[1126]: Ignition 2.20.0 Feb 13 15:16:49.102254 ignition[1126]: Stage: fetch Feb 13 15:16:49.103016 ignition[1126]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:49.103042 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:49.103211 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:49.115761 ignition[1126]: PUT result: OK Feb 13 15:16:49.119243 ignition[1126]: parsed url from cmdline: "" Feb 13 15:16:49.119369 ignition[1126]: no config URL provided Feb 13 15:16:49.119391 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:16:49.119417 ignition[1126]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:16:49.119449 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:49.126625 ignition[1126]: PUT result: OK Feb 13 15:16:49.126772 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 15:16:49.130598 ignition[1126]: GET result: OK Feb 13 15:16:49.131788 ignition[1126]: parsing config with SHA512: 8abdb13de062465c7252036468f43e318a21a844edfe658de5d785d3ddbba846ad7acb77c02a4f55839d149c54aa9e96437b8face6107e827f32f1fa61cccec7 Feb 13 15:16:49.140057 unknown[1126]: fetched base config from "system" Feb 13 15:16:49.140748 ignition[1126]: fetch: fetch complete Feb 13 15:16:49.140078 unknown[1126]: fetched base config from "system" Feb 13 15:16:49.140759 ignition[1126]: fetch: fetch passed Feb 13 15:16:49.140092 unknown[1126]: fetched user config from "aws" Feb 13 15:16:49.140837 ignition[1126]: Ignition finished successfully Feb 13 15:16:49.146522 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:16:49.159046 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:16:49.184883 ignition[1133]: Ignition 2.20.0 Feb 13 15:16:49.184914 ignition[1133]: Stage: kargs Feb 13 15:16:49.185910 ignition[1133]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:49.185936 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:49.186081 ignition[1133]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:49.188224 ignition[1133]: PUT result: OK Feb 13 15:16:49.206432 ignition[1133]: kargs: kargs passed Feb 13 15:16:49.207509 ignition[1133]: Ignition finished successfully Feb 13 15:16:49.210743 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:16:49.220998 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 15:16:49.248714 ignition[1139]: Ignition 2.20.0 Feb 13 15:16:49.248749 ignition[1139]: Stage: disks Feb 13 15:16:49.249832 ignition[1139]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:49.249859 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:49.250031 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:49.251419 ignition[1139]: PUT result: OK Feb 13 15:16:49.261492 ignition[1139]: disks: disks passed Feb 13 15:16:49.261578 ignition[1139]: Ignition finished successfully Feb 13 15:16:49.266317 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:16:49.270886 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:16:49.275683 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:16:49.278017 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:16:49.281922 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:16:49.285499 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:16:49.304077 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:16:49.344591 systemd-fsck[1147]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:16:49.351900 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:16:49.369982 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:16:49.449753 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none. Feb 13 15:16:49.451148 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:16:49.452037 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:16:49.466678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:16:49.472973 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:16:49.478059 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:16:49.478156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:16:49.478209 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:16:49.502072 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:16:49.510054 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1166) Feb 13 15:16:49.513500 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:49.513577 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:49.514789 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:16:49.513970 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:16:49.528726 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:16:49.531934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:16:50.005920 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:16:50.014735 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:16:50.034371 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:16:50.042510 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:16:50.353854 systemd-networkd[1116]: eth0: Gained IPv6LL Feb 13 15:16:50.368450 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:16:50.377910 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:16:50.387989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:16:50.404858 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:16:50.407553 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:50.438167 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:16:50.451748 ignition[1279]: INFO : Ignition 2.20.0 Feb 13 15:16:50.451748 ignition[1279]: INFO : Stage: mount Feb 13 15:16:50.451748 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:50.451748 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:50.451748 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:50.462286 ignition[1279]: INFO : PUT result: OK Feb 13 15:16:50.467137 ignition[1279]: INFO : mount: mount passed Feb 13 15:16:50.467137 ignition[1279]: INFO : Ignition finished successfully Feb 13 15:16:50.469280 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:16:50.481972 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:16:50.512129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:16:50.536977 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1290) Feb 13 15:16:50.541322 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:50.541366 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:50.541392 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:16:50.547733 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:16:50.551895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:16:50.585313 ignition[1307]: INFO : Ignition 2.20.0 Feb 13 15:16:50.585313 ignition[1307]: INFO : Stage: files Feb 13 15:16:50.589077 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:50.589077 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:50.589077 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:50.595983 ignition[1307]: INFO : PUT result: OK Feb 13 15:16:50.600085 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:16:50.612620 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:16:50.612620 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:16:50.662562 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:16:50.668158 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:16:50.671027 unknown[1307]: wrote ssh authorized keys file for user: core Feb 13 15:16:50.674747 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:16:50.677493 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:16:50.681128 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:16:50.780849 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:16:51.140916 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:16:51.140916 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:16:51.147970 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 15:16:51.399299 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:16:51.543667 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:16:51.543667 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:16:51.550121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:16:51.550121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:16:51.550121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:16:51.550121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:16:51.550121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:16:51.550121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 
13 15:16:51.571368 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:16:51.575240 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:16:51.575240 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:16:51.575240 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:51.586438 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:51.586438 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:51.586438 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 15:16:52.018688 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:16:52.331832 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:52.331832 ignition[1307]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:16:52.341116 ignition[1307]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:16:52.344540 ignition[1307]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:16:52.344540 ignition[1307]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:16:52.344540 ignition[1307]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:16:52.344540 ignition[1307]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:16:52.344540 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:16:52.359833 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:16:52.359833 ignition[1307]: INFO : files: files passed Feb 13 15:16:52.359833 ignition[1307]: INFO : Ignition finished successfully Feb 13 15:16:52.366764 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:16:52.376027 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:16:52.384886 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:16:52.389205 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:16:52.389398 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 15:16:52.435056 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:52.435056 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:52.443608 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:52.447058 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:16:52.452365 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:16:52.466934 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:16:52.509605 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:16:52.511779 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:16:52.516966 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:16:52.519237 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:16:52.523210 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:16:52.534077 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:16:52.564793 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:16:52.573007 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:16:52.603529 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:52.604179 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:52.604531 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:16:52.605473 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:16:52.606292 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:16:52.607603 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:16:52.608237 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:16:52.608875 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:16:52.609464 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:16:52.610341 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:16:52.611023 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:16:52.611531 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:16:52.612153 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:16:52.612749 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:16:52.613323 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:16:52.613834 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:16:52.614115 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:16:52.615015 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:52.615431 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:52.616207 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 15:16:52.634087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:52.634315 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:16:52.634533 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:16:52.635545 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:16:52.636111 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:16:52.655444 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:16:52.659864 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:16:52.690928 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:16:52.694537 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:16:52.694924 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:52.705069 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:16:52.707164 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:16:52.707447 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:52.711162 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:16:52.711389 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:16:52.745206 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:16:52.745398 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:16:52.757814 ignition[1360]: INFO : Ignition 2.20.0 Feb 13 15:16:52.757814 ignition[1360]: INFO : Stage: umount Feb 13 15:16:52.757814 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:52.757814 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:52.757814 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:52.769070 ignition[1360]: INFO : PUT result: OK Feb 13 15:16:52.773391 ignition[1360]: INFO : umount: umount passed Feb 13 15:16:52.775134 ignition[1360]: INFO : Ignition finished successfully Feb 13 15:16:52.778037 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:16:52.779550 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:16:52.784640 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:16:52.786856 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:16:52.791643 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:16:52.791769 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:16:52.797649 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:16:52.797799 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:16:52.801390 systemd[1]: Stopped target network.target - Network. Feb 13 15:16:52.803111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:16:52.803293 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:16:52.804869 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:16:52.810016 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:16:52.815662 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
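Both Ignition stages open with "PUT http://169.254.169.254/latest/api/token": the IMDSv2 handshake, in which a PUT returns a short-lived session token that must accompany every subsequent metadata GET. A standard-library sketch of the same exchange (it only does anything useful when run on an EC2 instance):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # IMDSv2 step 1: a PUT with a TTL header returns a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # IMDSv2 step 2: present the token on every metadata read.
    req = urllib.request.Request(
        f"{IMDS}/latest/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(req, timeout=2).read().decode())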
Feb 13 15:16:52.824932 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:16:52.827375 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:16:52.829231 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:16:52.829311 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:16:52.831197 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:16:52.831264 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:16:52.833278 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:16:52.833369 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:16:52.835526 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:16:52.835663 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:16:52.846362 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:16:52.848484 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:16:52.853897 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:16:52.868948 systemd-networkd[1116]: eth0: DHCPv6 lease lost Feb 13 15:16:52.873176 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:16:52.873619 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:16:52.877819 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:16:52.877895 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:52.897952 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:16:52.902801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:16:52.902922 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:16:52.909424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:52.921087 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:16:52.925627 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:16:52.942071 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:16:52.942516 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:16:52.958933 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:16:52.959969 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:52.968987 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:16:52.970160 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:16:52.976370 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:16:52.976492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:52.980436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:16:52.980505 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:52.983807 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:16:52.983902 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:16:52.993979 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:16:52.994074 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:16:52.996426 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 15:16:52.996505 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:52.999055 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:16:52.999133 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:16:53.011992 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:16:53.016865 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:16:53.016990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:53.019487 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:16:53.019575 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:53.021662 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:16:53.021756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:16:53.025942 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:16:53.026023 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:53.028341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:16:53.046611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:53.065021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:16:53.067415 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:16:53.071657 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:16:53.083991 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:16:53.110744 systemd[1]: Switching root. Feb 13 15:16:53.155528 systemd-journald[252]: Journal stopped Feb 13 15:16:55.497902 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Feb 13 15:16:55.498031 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:16:55.498076 kernel: SELinux: policy capability open_perms=1 Feb 13 15:16:55.498112 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:16:55.498151 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:16:55.498182 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:16:55.498212 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:16:55.498242 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:16:55.498274 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:16:55.498304 kernel: audit: type=1403 audit(1739459813.699:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:16:55.498344 systemd[1]: Successfully loaded SELinux policy in 71.336ms. Feb 13 15:16:55.498388 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.058ms. Feb 13 15:16:55.498434 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:16:55.498464 systemd[1]: Detected virtualization amazon. Feb 13 15:16:55.498495 systemd[1]: Detected architecture arm64. Feb 13 15:16:55.498526 systemd[1]: Detected first boot. Feb 13 15:16:55.498558 systemd[1]: Initializing machine ID from VM UUID. 
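"Initializing machine ID from VM UUID" means systemd's first-boot logic seeded /etc/machine-id from the hypervisor-provided UUID rather than generating a random one; on EC2 that UUID is exposed through SMBIOS/DMI and begins with "ec2" here, which is why the journal directory appearing later in this log is ec20e534ef3f1e3bcbe24134461d6fec. A simplified sketch of that derivation, assuming the usual sysfs path (readable by root only):

    from pathlib import Path

    # Firmware-provided VM UUID; systemd's first-boot code can derive the
    # machine ID from it instead of rolling a random one. This is the simple
    # case; the real logic has additional platform-specific handling.
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    # machine-id is the same 128 bits, lower-case, without dashes.
    print(uuid.replace("-", "").lower())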
Feb 13 15:16:55.498589 zram_generator::config[1403]: No configuration found. Feb 13 15:16:55.498625 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:16:55.498658 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:16:55.498690 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:16:55.498740 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:16:55.498777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:16:55.498809 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:16:55.498842 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:16:55.499192 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:16:55.499229 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:16:55.499259 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:16:55.499297 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:16:55.499332 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:16:55.499364 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:55.499394 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:55.499424 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:16:55.499456 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:16:55.499488 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:16:55.499519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:16:55.499551 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:16:55.499585 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:55.503835 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:16:55.503902 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:16:55.503933 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:16:55.503963 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:16:55.503993 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:55.504024 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:16:55.504055 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:16:55.504093 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:16:55.504124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:16:55.504153 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:16:55.504187 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:55.504219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:55.504249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:55.504281 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Feb 13 15:16:55.504310 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:16:55.504342 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:16:55.504375 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:16:55.504406 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:16:55.504437 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:16:55.504467 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:16:55.504500 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:16:55.504531 systemd[1]: Reached target machines.target - Containers. Feb 13 15:16:55.504563 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:16:55.504594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:55.504623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:16:55.504657 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:16:55.504686 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:55.504977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:16:55.505013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:55.505050 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:16:55.505081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:55.505115 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:16:55.505153 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:16:55.505188 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:16:55.505218 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:16:55.505248 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:16:55.505279 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:16:55.505310 kernel: fuse: init (API version 7.39) Feb 13 15:16:55.505340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:16:55.505371 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:16:55.505401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:16:55.505430 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:16:55.505465 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:16:55.505494 systemd[1]: Stopped verity-setup.service. Feb 13 15:16:55.505525 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:16:55.505554 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:16:55.505582 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:16:55.505613 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:16:55.505643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 15:16:55.505677 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:16:55.507662 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:55.507748 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:16:55.507781 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:16:55.507823 kernel: loop: module loaded Feb 13 15:16:55.507852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:55.507881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:55.507916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:55.507947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:55.507976 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:16:55.508004 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:16:55.508033 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:55.508061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:55.508090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:55.508126 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:16:55.508158 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:16:55.508187 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:16:55.508263 systemd-journald[1485]: Collecting audit messages is disabled. Feb 13 15:16:55.508311 kernel: ACPI: bus type drm_connector registered Feb 13 15:16:55.508340 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:16:55.508374 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:16:55.508405 systemd-journald[1485]: Journal started Feb 13 15:16:55.508453 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec20e534ef3f1e3bcbe24134461d6fec) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:16:54.883163 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:16:54.941989 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:16:55.516834 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:16:54.942786 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:16:55.521944 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:16:55.531026 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:16:55.539048 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:16:55.543680 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:16:55.548128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:55.557785 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:16:55.561909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
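The runtime journal reported above lives at /run/log/journal/<machine-id>/, using the machine ID derived earlier; note the "67.3M free" figure is simply the 75.3M cap minus the 8.0M in use. A rough sketch that re-derives the usage number by summing journal files under that directory (journald's own accounting counts allocated disk blocks, so the result may differ slightly):

    from pathlib import Path

    machine_id = Path("/etc/machine-id").read_text().strip()
    journal_dir = Path("/run/log/journal") / machine_id

    # Sum the apparent sizes of all journal files in the runtime journal.
    total = sum(f.stat().st_size for f in journal_dir.rglob("*.journal"))
    print(f"{total / 2**20:.1f}M used in {journal_dir}")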
Feb 13 15:16:55.573214 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:16:55.573312 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:55.583175 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:16:55.599437 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:16:55.604780 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:16:55.607572 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:55.608610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:16:55.611199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:16:55.613738 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:16:55.633901 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:16:55.637857 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:16:55.689801 kernel: loop0: detected capacity change from 0 to 113536 Feb 13 15:16:55.700960 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:16:55.709911 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:16:55.712510 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:16:55.716180 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:16:55.731984 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:16:55.767783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:55.803630 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec20e534ef3f1e3bcbe24134461d6fec is 82.007ms for 914 entries. Feb 13 15:16:55.803630 systemd-journald[1485]: System Journal (/var/log/journal/ec20e534ef3f1e3bcbe24134461d6fec) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:16:55.910911 systemd-journald[1485]: Received client request to flush runtime journal. Feb 13 15:16:55.910993 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:16:55.911027 kernel: loop1: detected capacity change from 0 to 53784 Feb 13 15:16:55.808542 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:16:55.812285 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:16:55.887513 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:16:55.905174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:16:55.921200 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:16:55.932281 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:55.942100 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:16:55.958746 kernel: loop2: detected capacity change from 0 to 116808 Feb 13 15:16:56.003386 systemd-tmpfiles[1548]: ACLs are not supported, ignoring. Feb 13 15:16:56.003429 systemd-tmpfiles[1548]: ACLs are not supported, ignoring. Feb 13 15:16:56.015956 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 15:16:56.022315 udevadm[1552]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:16:56.088763 kernel: loop3: detected capacity change from 0 to 194096 Feb 13 15:16:56.211747 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 15:16:56.227317 kernel: loop5: detected capacity change from 0 to 53784 Feb 13 15:16:56.247746 kernel: loop6: detected capacity change from 0 to 116808 Feb 13 15:16:56.259726 kernel: loop7: detected capacity change from 0 to 194096 Feb 13 15:16:56.287407 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:16:56.289069 (sd-merge)[1558]: Merged extensions into '/usr'. Feb 13 15:16:56.296936 systemd[1]: Reloading requested from client PID 1514 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:16:56.298376 systemd[1]: Reloading... Feb 13 15:16:56.506781 zram_generator::config[1584]: No configuration found. Feb 13 15:16:56.831419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:56.938549 systemd[1]: Reloading finished in 638 ms. Feb 13 15:16:56.975776 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:16:56.979022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:16:57.000150 systemd[1]: Starting ensure-sysext.service... Feb 13 15:16:57.015815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:16:57.023057 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:57.040555 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:16:57.040582 systemd[1]: Reloading... Feb 13 15:16:57.098109 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:16:57.100158 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:16:57.105366 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:16:57.107871 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Feb 13 15:16:57.108059 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Feb 13 15:16:57.118185 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:57.118208 systemd-tmpfiles[1637]: Skipping /boot Feb 13 15:16:57.145361 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:57.145544 systemd-tmpfiles[1637]: Skipping /boot Feb 13 15:16:57.149550 systemd-udevd[1638]: Using default interface naming scheme 'v255'. Feb 13 15:16:57.183463 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:16:57.264066 zram_generator::config[1672]: No configuration found. Feb 13 15:16:57.407354 (udev-worker)[1676]: Network interface NamePolicy= disabled on kernel command line. 
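The (sd-merge) lines above show systemd-sysext locating four extension images and overlaying them onto /usr; the kubernetes.raw symlink Ignition wrote earlier is how that extension entered the search path. A sketch of the discovery step only, assuming three common search directories (the real merge then builds an overlayfs mount over /usr and /opt):

    from pathlib import Path

    # Directories systemd-sysext scans for *.raw images or extension trees;
    # /etc/extensions is where Ignition placed kubernetes.raw in this log.
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_PATHS):
        if d.is_dir():
            for entry in sorted(d.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    print(f"{entry.name}: {entry.resolve()}")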
Feb 13 15:16:57.656444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:57.661571 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1723) Feb 13 15:16:57.811925 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:16:57.813050 systemd[1]: Reloading finished in 771 ms. Feb 13 15:16:57.851023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:57.855011 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:16:57.860074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:57.942826 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:16:57.952399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:16:57.980137 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:16:57.991051 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:16:57.993566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:58.003042 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:16:58.009091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:58.025076 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:16:58.029224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:58.035044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:58.038088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:58.042069 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:16:58.049546 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:16:58.064604 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:16:58.074982 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:16:58.077050 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:16:58.096908 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:16:58.101895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:58.106434 systemd[1]: Finished ensure-sysext.service. Feb 13 15:16:58.124417 lvm[1836]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:16:58.125613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:58.126029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:58.158058 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:16:58.189953 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:16:58.193194 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 15:16:58.215013 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:16:58.233860 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:16:58.237321 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:58.237657 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:16:58.240749 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:16:58.244431 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:58.245825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:58.253976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:58.255830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:58.261391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:58.261579 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:58.270138 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:16:58.275204 lvm[1869]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:16:58.285340 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:16:58.301693 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:16:58.306288 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:16:58.321952 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:16:58.334613 augenrules[1883]: No rules Feb 13 15:16:58.338217 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:16:58.339792 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:16:58.359786 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:16:58.378147 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:16:58.470111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:58.496091 systemd-networkd[1849]: lo: Link UP Feb 13 15:16:58.496114 systemd-networkd[1849]: lo: Gained carrier Feb 13 15:16:58.498847 systemd-networkd[1849]: Enumeration completed Feb 13 15:16:58.499033 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:16:58.502016 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:58.502039 systemd-networkd[1849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:58.504042 systemd-networkd[1849]: eth0: Link UP Feb 13 15:16:58.504349 systemd-networkd[1849]: eth0: Gained carrier Feb 13 15:16:58.504382 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:16:58.516825 systemd-networkd[1849]: eth0: DHCPv4 address 172.31.28.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:16:58.519049 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:16:58.530107 systemd-resolved[1853]: Positive Trust Anchors: Feb 13 15:16:58.530171 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:16:58.530235 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:16:58.539790 systemd-resolved[1853]: Defaulting to hostname 'linux'. Feb 13 15:16:58.542997 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:16:58.545242 systemd[1]: Reached target network.target - Network. Feb 13 15:16:58.546976 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:58.549159 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:16:58.551259 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:16:58.553568 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:16:58.556104 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:16:58.558233 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:16:58.560589 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:16:58.562901 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:16:58.562950 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:16:58.564656 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:16:58.567138 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:16:58.571908 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:16:58.587174 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:16:58.590431 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:16:58.592906 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:16:58.594956 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:16:58.597092 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:58.597144 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:58.604895 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:16:58.615041 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:16:58.622073 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:16:58.630907 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
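eth0 obtained 172.31.28.87/20 from gateway 172.31.16.1 above; a /20 VPC subnet spans 4096 addresses, and the standard library confirms the gateway is on-link:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.28.87/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096
    print(gateway in iface.network)     # True: the gateway is on-link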
Feb 13 15:16:58.641473 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:16:58.644115 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:16:58.655746 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:16:58.665667 jq[1906]: false Feb 13 15:16:58.661640 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:16:58.671970 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:16:58.677860 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:16:58.690024 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:16:58.699049 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:16:58.730028 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:16:58.732922 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:16:58.736924 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:16:58.753595 extend-filesystems[1907]: Found loop4 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found loop5 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found loop6 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found loop7 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1p1 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1p2 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1p3 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found usr Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1p4 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1p6 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1p7 Feb 13 15:16:58.780901 extend-filesystems[1907]: Found nvme0n1p9 Feb 13 15:16:58.780901 extend-filesystems[1907]: Checking size of /dev/nvme0n1p9 Feb 13 15:16:58.765165 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:16:58.771237 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:16:58.780906 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:16:58.797818 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:16:58.832397 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:16:58.834653 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:16:58.851986 dbus-daemon[1905]: [system] SELinux support is enabled Feb 13 15:16:58.860989 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:16:58.877582 extend-filesystems[1907]: Resized partition /dev/nvme0n1p9 Feb 13 15:16:58.876963 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
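Unit names such as system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service and dev-disk-by\x2dlabel-OEM.device above use systemd's path escaping: "/" maps to "-", and ambiguous characters, including a literal "-", become \xXX hex escapes. A rough Python equivalent of "systemd-escape --path", covering the cases seen in this log rather than every corner of the real rules:

    def systemd_escape_path(path: str) -> str:
        # Rough sketch of `systemd-escape --path`; handles the common cases.
        out = []
        for i, ch in enumerate(path.strip("/")):
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)                   # safe characters pass through
            else:
                out.append(f"\\x{ord(ch):02x}")  # everything else, incl. "-", is hex-escaped
        return "".join(out)

    print(systemd_escape_path("/usr/share/oem/cloud-config.yml"))
    # -> usr-share-oem-cloud\x2dconfig.yml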
Feb 13 15:16:58.871423 dbus-daemon[1905]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1849 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:47 UTC 2025 (1): Starting Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: ---------------------------------------------------- Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: corporation. Support and training for ntp-4 are Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: available at https://www.nwtime.org/support Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: ---------------------------------------------------- Feb 13 15:16:58.898093 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: proto: precision = 0.096 usec (-23) Feb 13 15:16:58.911821 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:16:58.911865 extend-filesystems[1942]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:16:58.877015 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:16:58.875601 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:47 UTC 2025 (1): Starting Feb 13 15:16:58.917117 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: basedate set to 2025-02-01 Feb 13 15:16:58.917117 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: gps base set to 2025-02-02 (week 2352) Feb 13 15:16:58.892257 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:16:58.875671 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:16:58.892297 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:16:58.875692 ntpd[1909]: ---------------------------------------------------- Feb 13 15:16:58.875734 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:16:58.875753 ntpd[1909]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:16:58.875772 ntpd[1909]: corporation. Support and training for ntp-4 are Feb 13 15:16:58.875790 ntpd[1909]: available at https://www.nwtime.org/support Feb 13 15:16:58.875809 ntpd[1909]: ---------------------------------------------------- Feb 13 15:16:58.895887 ntpd[1909]: proto: precision = 0.096 usec (-23) Feb 13 15:16:58.903186 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:16:58.903763 ntpd[1909]: basedate set to 2025-02-01 Feb 13 15:16:58.903794 ntpd[1909]: gps base set to 2025-02-02 (week 2352) Feb 13 15:16:58.924804 jq[1920]: true Feb 13 15:16:58.921934 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Feb 13 15:16:58.919109 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Listen normally on 3 eth0 172.31.28.87:123 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Listen normally on 4 lo [::1]:123 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: bind(21) AF_INET6 fe80::430:9dff:fee2:28e1%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: unable to create socket on eth0 (5) for fe80::430:9dff:fee2:28e1%2#123 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: failed to init interface for address fe80::430:9dff:fee2:28e1%2 Feb 13 15:16:58.931031 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: Listening on routing socket on fd #21 for interface updates Feb 13 15:16:58.919190 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:16:58.919478 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:16:58.943480 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:58.943480 ntpd[1909]: 13 Feb 15:16:58 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:58.919543 ntpd[1909]: Listen normally on 3 eth0 172.31.28.87:123 Feb 13 15:16:58.919627 ntpd[1909]: Listen normally on 4 lo [::1]:123 Feb 13 15:16:58.919730 ntpd[1909]: bind(21) AF_INET6 fe80::430:9dff:fee2:28e1%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:16:58.919770 ntpd[1909]: unable to create socket on eth0 (5) for fe80::430:9dff:fee2:28e1%2#123 Feb 13 15:16:58.919797 ntpd[1909]: failed to init interface for address fe80::430:9dff:fee2:28e1%2 Feb 13 15:16:58.919855 ntpd[1909]: Listening on routing socket on fd #21 for interface updates Feb 13 15:16:58.939231 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:58.939301 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:58.944635 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:16:58.945859 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:16:58.973776 (ntainerd)[1944]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:16:59.007031 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:16:59.020732 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:16:59.020813 jq[1950]: true Feb 13 15:16:59.026464 tar[1928]: linux-arm64/helm Feb 13 15:16:59.044134 update_engine[1918]: I20250213 15:16:59.027220 1918 main.cc:92] Flatcar Update Engine starting Feb 13 15:16:59.044134 update_engine[1918]: I20250213 15:16:59.041003 1918 update_check_scheduler.cc:74] Next update check in 2m28s Feb 13 15:16:59.041342 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:16:59.051398 extend-filesystems[1942]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:16:59.051398 extend-filesystems[1942]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:16:59.051398 extend-filesystems[1942]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 13 15:16:59.063826 extend-filesystems[1907]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:16:59.060828 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:16:59.066550 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:16:59.067852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:16:59.132019 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:16:59.148813 coreos-metadata[1904]: Feb 13 15:16:59.148 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:16:59.150877 coreos-metadata[1904]: Feb 13 15:16:59.150 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:16:59.152036 coreos-metadata[1904]: Feb 13 15:16:59.151 INFO Fetch successful Feb 13 15:16:59.161732 coreos-metadata[1904]: Feb 13 15:16:59.159 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:16:59.163037 coreos-metadata[1904]: Feb 13 15:16:59.162 INFO Fetch successful Feb 13 15:16:59.163134 coreos-metadata[1904]: Feb 13 15:16:59.163 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:16:59.164976 coreos-metadata[1904]: Feb 13 15:16:59.164 INFO Fetch successful Feb 13 15:16:59.166729 coreos-metadata[1904]: Feb 13 15:16:59.165 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:16:59.171273 coreos-metadata[1904]: Feb 13 15:16:59.170 INFO Fetch successful Feb 13 15:16:59.171273 coreos-metadata[1904]: Feb 13 15:16:59.170 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:16:59.172200 coreos-metadata[1904]: Feb 13 15:16:59.172 INFO Fetch failed with 404: resource not found Feb 13 15:16:59.172200 coreos-metadata[1904]: Feb 13 15:16:59.172 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:16:59.176813 coreos-metadata[1904]: Feb 13 15:16:59.176 INFO Fetch successful Feb 13 15:16:59.176923 coreos-metadata[1904]: Feb 13 15:16:59.176 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:16:59.183146 coreos-metadata[1904]: Feb 13 15:16:59.183 INFO Fetch successful Feb 13 15:16:59.183146 coreos-metadata[1904]: Feb 13 15:16:59.183 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:16:59.187220 coreos-metadata[1904]: Feb 13 15:16:59.187 INFO Fetch successful Feb 13 15:16:59.188064 coreos-metadata[1904]: Feb 13 15:16:59.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:16:59.190120 systemd-logind[1917]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:16:59.190161 systemd-logind[1917]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:16:59.192320 systemd-logind[1917]: New seat seat0. Feb 13 15:16:59.192978 coreos-metadata[1904]: Feb 13 15:16:59.192 INFO Fetch successful Feb 13 15:16:59.193341 coreos-metadata[1904]: Feb 13 15:16:59.192 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:16:59.196003 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:16:59.198518 coreos-metadata[1904]: Feb 13 15:16:59.198 INFO Fetch successful Feb 13 15:16:59.334764 bash[1995]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:16:59.339381 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
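The extend-filesystems pass above grew the ext4 root filesystem online from 553472 to 1489915 blocks of 4 KiB, filling out the partition; in bytes that is roughly 2.1 GiB before and 5.7 GiB after:

    BLOCK = 4096  # ext4 block size reported in the log ("(4k) blocks")

    for label, blocks in [("before", 553_472), ("after", 1_489_915)]:
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after:  5.68 GiB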
Feb 13 15:16:59.344742 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1676) Feb 13 15:16:59.350180 systemd[1]: Starting sshkeys.service... Feb 13 15:16:59.354622 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:16:59.358586 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:16:59.402849 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:16:59.409222 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:16:59.677467 coreos-metadata[2032]: Feb 13 15:16:59.670 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:16:59.677467 coreos-metadata[2032]: Feb 13 15:16:59.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:16:59.677467 coreos-metadata[2032]: Feb 13 15:16:59.675 INFO Fetch successful Feb 13 15:16:59.677467 coreos-metadata[2032]: Feb 13 15:16:59.675 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:16:59.681643 coreos-metadata[2032]: Feb 13 15:16:59.678 INFO Fetch successful Feb 13 15:16:59.685860 unknown[2032]: wrote ssh authorized keys file for user: core Feb 13 15:16:59.784967 update-ssh-keys[2069]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:16:59.788235 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:16:59.797815 systemd[1]: Finished sshkeys.service. Feb 13 15:16:59.816363 containerd[1944]: time="2025-02-13T15:16:59.816222564Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:16:59.829611 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:16:59.833374 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:16:59.840529 dbus-daemon[1905]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1946 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:16:59.851295 systemd[1]: Starting polkit.service - Authorization Manager... 
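coreos-metadata-sshkeys fetched the instance's public key over IMDS and "wrote ssh authorized keys file for user: core", landing in /home/core/.ssh/authorized_keys per the update-ssh-keys line above. A minimal sketch of that final write, assuming conventional paths, modes, and ownership (the real agent merges key fragments rather than overwriting wholesale):

    import os
    import pwd
    from pathlib import Path

    def install_authorized_keys(user: str, keys: list[str]) -> None:
        pw = pwd.getpwnam(user)
        ssh_dir = Path(pw.pw_dir) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)   # sshd insists on strict modes
        target = ssh_dir / "authorized_keys"
        target.write_text("".join(k.rstrip() + "\n" for k in keys))
        target.chmod(0o600)
        for p in (ssh_dir, target):
            os.chown(p, pw.pw_uid, pw.pw_gid)      # files must belong to the user

    # Illustrative key, not the one fetched on this boot.
    install_authorized_keys("core", ["ssh-ed25519 AAAA... core@example"])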
Feb 13 15:16:59.877239 ntpd[1909]: bind(24) AF_INET6 fe80::430:9dff:fee2:28e1%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:16:59.877304 ntpd[1909]: unable to create socket on eth0 (6) for fe80::430:9dff:fee2:28e1%2#123
Feb 13 15:16:59.884134 ntpd[1909]: 13 Feb 15:16:59 ntpd[1909]: bind(24) AF_INET6 fe80::430:9dff:fee2:28e1%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:16:59.884134 ntpd[1909]: 13 Feb 15:16:59 ntpd[1909]: unable to create socket on eth0 (6) for fe80::430:9dff:fee2:28e1%2#123
Feb 13 15:16:59.884134 ntpd[1909]: 13 Feb 15:16:59 ntpd[1909]: failed to init interface for address fe80::430:9dff:fee2:28e1%2
Feb 13 15:16:59.877333 ntpd[1909]: failed to init interface for address fe80::430:9dff:fee2:28e1%2
Feb 13 15:16:59.909902 locksmithd[1960]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:16:59.919919 polkitd[2088]: Started polkitd version 121
Feb 13 15:16:59.961478 polkitd[2088]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:16:59.961611 polkitd[2088]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:16:59.967435 polkitd[2088]: Finished loading, compiling and executing 2 rules
Feb 13 15:16:59.981972 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:16:59.982286 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:16:59.989392 polkitd[2088]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:17:00.035341 systemd-hostnamed[1946]: Hostname set to (transient)
Feb 13 15:17:00.035342 systemd-resolved[1853]: System hostname changed to 'ip-172-31-28-87'.
Feb 13 15:17:00.059995 containerd[1944]: time="2025-02-13T15:17:00.059910465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:17:00.065315 containerd[1944]: time="2025-02-13T15:17:00.065233377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:17:00.065315 containerd[1944]: time="2025-02-13T15:17:00.065303697Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:17:00.065472 containerd[1944]: time="2025-02-13T15:17:00.065340741Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:17:00.065716 containerd[1944]: time="2025-02-13T15:17:00.065657433Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:17:00.065798 containerd[1944]: time="2025-02-13T15:17:00.065734053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:17:00.065911 containerd[1944]: time="2025-02-13T15:17:00.065869653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:17:00.065965 containerd[1944]: time="2025-02-13T15:17:00.065909601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:17:00.066256 containerd[1944]: time="2025-02-13T15:17:00.066210909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:17:00.066336 containerd[1944]: time="2025-02-13T15:17:00.066251469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:17:00.066336 containerd[1944]: time="2025-02-13T15:17:00.066285669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:17:00.066336 containerd[1944]: time="2025-02-13T15:17:00.066309597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:17:00.067737 containerd[1944]: time="2025-02-13T15:17:00.066467313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:17:00.071026 containerd[1944]: time="2025-02-13T15:17:00.070954845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:17:00.071250 containerd[1944]: time="2025-02-13T15:17:00.071205165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:17:00.071307 containerd[1944]: time="2025-02-13T15:17:00.071246781Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:17:00.071473 containerd[1944]: time="2025-02-13T15:17:00.071433849Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:17:00.071583 containerd[1944]: time="2025-02-13T15:17:00.071545377Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:17:00.078560 containerd[1944]: time="2025-02-13T15:17:00.078455589Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:17:00.078560 containerd[1944]: time="2025-02-13T15:17:00.078552033Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:17:00.079084 containerd[1944]: time="2025-02-13T15:17:00.078597105Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:17:00.079084 containerd[1944]: time="2025-02-13T15:17:00.078633345Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:17:00.079084 containerd[1944]: time="2025-02-13T15:17:00.078669201Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:17:00.079084 containerd[1944]: time="2025-02-13T15:17:00.078947097Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079398225Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079624857Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079663557Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079726329Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079763229Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079793421Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079823409Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.079884 containerd[1944]: time="2025-02-13T15:17:00.079854141Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.079886709Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.079915929Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.079947645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.079975089Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080015421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080045913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080075445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080106333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080137281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080169165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080197209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080238405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080269341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.081145 containerd[1944]: time="2025-02-13T15:17:00.080301297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.082272 containerd[1944]: time="2025-02-13T15:17:00.080334285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.082272 containerd[1944]: time="2025-02-13T15:17:00.080373141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.082272 containerd[1944]: time="2025-02-13T15:17:00.080403165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.082272 containerd[1944]: time="2025-02-13T15:17:00.080437821Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:17:00.082272 containerd[1944]: time="2025-02-13T15:17:00.080481201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.082272 containerd[1944]: time="2025-02-13T15:17:00.080512785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.082272 containerd[1944]: time="2025-02-13T15:17:00.080538933Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:17:00.085751 containerd[1944]: time="2025-02-13T15:17:00.080682885Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:17:00.085993 containerd[1944]: time="2025-02-13T15:17:00.085781025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:17:00.085993 containerd[1944]: time="2025-02-13T15:17:00.085823841Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:17:00.085993 containerd[1944]: time="2025-02-13T15:17:00.085853853Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:17:00.085993 containerd[1944]: time="2025-02-13T15:17:00.085877757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.085993 containerd[1944]: time="2025-02-13T15:17:00.085911201Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:17:00.085993 containerd[1944]: time="2025-02-13T15:17:00.085934997Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:17:00.085993 containerd[1944]: time="2025-02-13T15:17:00.085959969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:17:00.087200 containerd[1944]: time="2025-02-13T15:17:00.086512989Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:17:00.087200 containerd[1944]: time="2025-02-13T15:17:00.086621169Z" level=info msg="Connect containerd service"
Feb 13 15:17:00.087200 containerd[1944]: time="2025-02-13T15:17:00.086730597Z" level=info msg="using legacy CRI server"
Feb 13 15:17:00.087200 containerd[1944]: time="2025-02-13T15:17:00.086751933Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:17:00.087200 containerd[1944]: time="2025-02-13T15:17:00.087008973Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:17:00.091408 containerd[1944]: time="2025-02-13T15:17:00.090366681Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:17:00.091408 containerd[1944]: time="2025-02-13T15:17:00.090844113Z" level=info msg="Start subscribing containerd event"
Feb 13 15:17:00.091408 containerd[1944]: time="2025-02-13T15:17:00.090925869Z" level=info msg="Start recovering state"
Feb 13 15:17:00.091408 containerd[1944]: time="2025-02-13T15:17:00.091005741Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:17:00.092241 containerd[1944]: time="2025-02-13T15:17:00.091685085Z" level=info msg="Start event monitor"
Feb 13 15:17:00.092241 containerd[1944]: time="2025-02-13T15:17:00.091738533Z" level=info msg="Start snapshots syncer"
Feb 13 15:17:00.092241 containerd[1944]: time="2025-02-13T15:17:00.091762125Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:17:00.092241 containerd[1944]: time="2025-02-13T15:17:00.091784949Z" level=info msg="Start streaming server"
Feb 13 15:17:00.094990 containerd[1944]: time="2025-02-13T15:17:00.094784745Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:17:00.095043 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:17:00.099742 containerd[1944]: time="2025-02-13T15:17:00.098316069Z" level=info msg="containerd successfully booted in 0.287785s"
Feb 13 15:17:00.145935 systemd-networkd[1849]: eth0: Gained IPv6LL
Feb 13 15:17:00.156977 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:17:00.160353 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:17:00.175399 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:17:00.188092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:00.194508 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:17:00.322807 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:17:00.334980 amazon-ssm-agent[2110]: Initializing new seelog logger
Feb 13 15:17:00.337760 amazon-ssm-agent[2110]: New Seelog Logger Creation Complete
Feb 13 15:17:00.337760 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:17:00.337760 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:17:00.337760 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 processing appconfig overrides
Feb 13 15:17:00.339542 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:17:00.339743 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:17:00.339941 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 processing appconfig overrides
Feb 13 15:17:00.340524 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:17:00.340621 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:17:00.341747 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 processing appconfig overrides
Feb 13 15:17:00.341747 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO Proxy environment variables:
Feb 13 15:17:00.347447 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:17:00.347447 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
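The level=error line about /etc/cni/net.d above is expected at this stage: no CNI network configuration has been installed yet, so the cri plugin starts without pod networking and retries later. For illustration only, a conflist that would satisfy that loader might look like the sketch below; the network name, bridge device, and subnet are invented placeholders, and in practice a CNI plugin (flannel, Calico, ...) installs its own config here:

    # Illustrative sketch: write a minimal CNI bridge conflist into the
    # directory the error message names. Requires root; all values below
    # are placeholders, not taken from this system.
    import json, pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "example-net",            # placeholder network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",         # placeholder bridge device
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",   # placeholder pod subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            }
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))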
Feb 13 15:17:00.347650 amazon-ssm-agent[2110]: 2025/02/13 15:17:00 processing appconfig overrides Feb 13 15:17:00.441975 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO https_proxy: Feb 13 15:17:00.548284 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO http_proxy: Feb 13 15:17:00.645166 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO no_proxy: Feb 13 15:17:00.743216 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:17:00.842733 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:17:00.871887 tar[1928]: linux-arm64/LICENSE Feb 13 15:17:00.872594 tar[1928]: linux-arm64/README.md Feb 13 15:17:00.913802 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:17:00.940786 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO Agent will take identity from EC2 Feb 13 15:17:01.040650 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:01.141523 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:01.242778 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:01.341723 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:17:01.438243 sshd_keygen[1940]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:17:01.441231 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:17:01.484013 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:17:01.498264 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:17:01.504073 systemd[1]: Started sshd@0-172.31.28.87:22-139.178.68.195:34078.service - OpenSSH per-connection server daemon (139.178.68.195:34078). Feb 13 15:17:01.520947 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:17:01.520947 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:17:01.520947 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [Registrar] Starting registrar module Feb 13 15:17:01.520947 amazon-ssm-agent[2110]: 2025-02-13 15:17:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:17:01.520947 amazon-ssm-agent[2110]: 2025-02-13 15:17:01 INFO [EC2Identity] EC2 registration was successful. Feb 13 15:17:01.522014 amazon-ssm-agent[2110]: 2025-02-13 15:17:01 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:17:01.522014 amazon-ssm-agent[2110]: 2025-02-13 15:17:01 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:17:01.522014 amazon-ssm-agent[2110]: 2025-02-13 15:17:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:17:01.530554 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:17:01.531086 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:17:01.540886 amazon-ssm-agent[2110]: 2025-02-13 15:17:01 INFO [CredentialRefresher] Next credential rotation will be in 32.28332306433333 minutes Feb 13 15:17:01.545274 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:17:01.591667 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Feb 13 15:17:01.603319 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:17:01.614302 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:17:01.617870 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:17:01.737881 sshd[2140]: Accepted publickey for core from 139.178.68.195 port 34078 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:01.741991 sshd-session[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:01.759763 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:17:01.770235 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:17:01.780888 systemd-logind[1917]: New session 1 of user core. Feb 13 15:17:01.798767 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:17:01.812281 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:17:01.830194 (systemd)[2151]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:17:02.044145 systemd[2151]: Queued start job for default target default.target. Feb 13 15:17:02.053612 systemd[2151]: Created slice app.slice - User Application Slice. Feb 13 15:17:02.053676 systemd[2151]: Reached target paths.target - Paths. Feb 13 15:17:02.053741 systemd[2151]: Reached target timers.target - Timers. Feb 13 15:17:02.056204 systemd[2151]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:17:02.088774 systemd[2151]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:17:02.089014 systemd[2151]: Reached target sockets.target - Sockets. Feb 13 15:17:02.089047 systemd[2151]: Reached target basic.target - Basic System. Feb 13 15:17:02.089129 systemd[2151]: Reached target default.target - Main User Target. Feb 13 15:17:02.089194 systemd[2151]: Startup finished in 247ms. Feb 13 15:17:02.089505 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:17:02.104019 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:17:02.268627 systemd[1]: Started sshd@1-172.31.28.87:22-139.178.68.195:34082.service - OpenSSH per-connection server daemon (139.178.68.195:34082). Feb 13 15:17:02.397548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:02.401051 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:17:02.403499 systemd[1]: Startup finished in 1.083s (kernel) + 8.880s (initrd) + 8.773s (userspace) = 18.736s. Feb 13 15:17:02.415677 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:02.475536 sshd[2162]: Accepted publickey for core from 139.178.68.195 port 34082 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:02.478271 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:02.487096 systemd-logind[1917]: New session 2 of user core. Feb 13 15:17:02.490990 systemd[1]: Started session-2.scope - Session 2 of User core. 
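The SSH logins in this stretch all emit the same pattern: an "Accepted publickey" record followed by a pam_unix session open. A small parser for the accept record, with the regex shaped to the exact format in this log and tested against the line above:

    # Parse sshd "Accepted publickey" records like the ones above.
    import re

    ACCEPT_RE = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) "
        r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
    )

    # Sample input copied verbatim from the log record above.
    line = ("Accepted publickey for core from 139.178.68.195 port 34078 "
            "ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o")

    m = ACCEPT_RE.search(line)
    if m:
        print(m.groupdict())
        # {'user': 'core', 'ip': '139.178.68.195', 'port': '34078',
        #  'keytype': 'RSA', 'fingerprint': 'SHA256:ygX9...'}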
Feb 13 15:17:02.549312 amazon-ssm-agent[2110]: 2025-02-13 15:17:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:17:02.622477 sshd[2174]: Connection closed by 139.178.68.195 port 34082 Feb 13 15:17:02.623413 sshd-session[2162]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:02.632322 systemd[1]: sshd@1-172.31.28.87:22-139.178.68.195:34082.service: Deactivated successfully. Feb 13 15:17:02.639096 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:17:02.645959 systemd-logind[1917]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:17:02.650189 amazon-ssm-agent[2110]: 2025-02-13 15:17:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2176) started Feb 13 15:17:02.666874 systemd[1]: Started sshd@2-172.31.28.87:22-139.178.68.195:34094.service - OpenSSH per-connection server daemon (139.178.68.195:34094). Feb 13 15:17:02.669910 systemd-logind[1917]: Removed session 2. Feb 13 15:17:02.750691 amazon-ssm-agent[2110]: 2025-02-13 15:17:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:17:02.866114 sshd[2185]: Accepted publickey for core from 139.178.68.195 port 34094 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:02.870753 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:02.876354 ntpd[1909]: Listen normally on 7 eth0 [fe80::430:9dff:fee2:28e1%2]:123 Feb 13 15:17:02.877147 ntpd[1909]: 13 Feb 15:17:02 ntpd[1909]: Listen normally on 7 eth0 [fe80::430:9dff:fee2:28e1%2]:123 Feb 13 15:17:02.881063 systemd-logind[1917]: New session 3 of user core. Feb 13 15:17:02.892963 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:17:03.011760 sshd[2196]: Connection closed by 139.178.68.195 port 34094 Feb 13 15:17:03.013098 sshd-session[2185]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:03.018482 systemd[1]: sshd@2-172.31.28.87:22-139.178.68.195:34094.service: Deactivated successfully. Feb 13 15:17:03.022334 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:17:03.028979 systemd-logind[1917]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:17:03.030625 systemd-logind[1917]: Removed session 3. Feb 13 15:17:03.056247 systemd[1]: Started sshd@3-172.31.28.87:22-139.178.68.195:34110.service - OpenSSH per-connection server daemon (139.178.68.195:34110). Feb 13 15:17:03.248782 sshd[2201]: Accepted publickey for core from 139.178.68.195 port 34110 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:03.252294 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:03.259299 systemd-logind[1917]: New session 4 of user core. Feb 13 15:17:03.269999 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:17:03.400441 sshd[2203]: Connection closed by 139.178.68.195 port 34110 Feb 13 15:17:03.401281 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:03.409289 systemd[1]: sshd@3-172.31.28.87:22-139.178.68.195:34110.service: Deactivated successfully. Feb 13 15:17:03.415316 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:17:03.417338 systemd-logind[1917]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:17:03.420893 systemd-logind[1917]: Removed session 4. 
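ntpd's earlier bind(24) failure ("Cannot assign requested address") and the successful "Listen normally on 7 eth0" above bracket the moment eth0 gained its IPv6 link-local address. A sketch of the mechanics: binding a fe80:: address requires both that the address is actually configured on the interface and that a scope_id identifies which interface; the address below is the one from the log, everything else is illustrative:

    # Link-local IPv6 binds need an interface scope, and fail with
    # EADDRNOTAVAIL until the address is configured -- the same error
    # ntpd hit above. Port 123 (NTP) also needs root to bind.
    import socket

    addr = "fe80::430:9dff:fee2:28e1"   # address from the log
    ifname = "eth0"

    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        scope = socket.if_nametoindex(ifname)
        # AF_INET6 bind takes a 4-tuple: (host, port, flowinfo, scope_id)
        s.bind((addr, 123, 0, scope))
        print("bound")
    except OSError as e:
        print("bind failed:", e)   # tentative/absent address or missing privilege
    finally:
        s.close()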
Feb 13 15:17:03.452647 systemd[1]: Started sshd@4-172.31.28.87:22-139.178.68.195:34124.service - OpenSSH per-connection server daemon (139.178.68.195:34124). Feb 13 15:17:03.641611 sshd[2209]: Accepted publickey for core from 139.178.68.195 port 34124 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:03.644886 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:03.653464 systemd-logind[1917]: New session 5 of user core. Feb 13 15:17:03.661018 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:17:03.769430 kubelet[2169]: E0213 15:17:03.769335 2169 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:03.773539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:03.774004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:03.775619 systemd[1]: kubelet.service: Consumed 1.317s CPU time. Feb 13 15:17:03.817225 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:17:03.817862 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:03.834346 sudo[2213]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:03.857740 sshd[2212]: Connection closed by 139.178.68.195 port 34124 Feb 13 15:17:03.858933 sshd-session[2209]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:03.866181 systemd[1]: sshd@4-172.31.28.87:22-139.178.68.195:34124.service: Deactivated successfully. Feb 13 15:17:03.869244 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:17:03.870589 systemd-logind[1917]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:17:03.872910 systemd-logind[1917]: Removed session 5. Feb 13 15:17:03.898213 systemd[1]: Started sshd@5-172.31.28.87:22-139.178.68.195:34138.service - OpenSSH per-connection server daemon (139.178.68.195:34138). Feb 13 15:17:04.092741 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 34138 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:04.095386 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:04.103294 systemd-logind[1917]: New session 6 of user core. Feb 13 15:17:04.111943 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:17:04.217669 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:17:04.219077 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:04.225808 sudo[2223]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:04.235836 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:17:04.236464 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:04.263242 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:17:04.308677 augenrules[2245]: No rules Feb 13 15:17:04.310929 systemd[1]: audit-rules.service: Deactivated successfully. 
Feb 13 15:17:04.311297 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:17:04.314606 sudo[2222]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:04.338124 sshd[2221]: Connection closed by 139.178.68.195 port 34138 Feb 13 15:17:04.338902 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:04.344630 systemd-logind[1917]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:17:04.346054 systemd[1]: sshd@5-172.31.28.87:22-139.178.68.195:34138.service: Deactivated successfully. Feb 13 15:17:04.349056 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:17:04.352355 systemd-logind[1917]: Removed session 6. Feb 13 15:17:04.374396 systemd[1]: Started sshd@6-172.31.28.87:22-139.178.68.195:34146.service - OpenSSH per-connection server daemon (139.178.68.195:34146). Feb 13 15:17:04.564558 sshd[2253]: Accepted publickey for core from 139.178.68.195 port 34146 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:04.567574 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:04.576060 systemd-logind[1917]: New session 7 of user core. Feb 13 15:17:04.587983 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:17:04.692226 sudo[2256]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:17:04.693328 sudo[2256]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:05.399221 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:17:05.412194 (dockerd)[2274]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:17:05.869787 dockerd[2274]: time="2025-02-13T15:17:05.868651410Z" level=info msg="Starting up" Feb 13 15:17:06.249979 systemd-resolved[1853]: Clock change detected. Flushing caches. Feb 13 15:17:06.537066 dockerd[2274]: time="2025-02-13T15:17:06.536527369Z" level=info msg="Loading containers: start." Feb 13 15:17:06.817166 kernel: Initializing XFRM netlink socket Feb 13 15:17:06.855842 (udev-worker)[2300]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:17:06.958091 systemd-networkd[1849]: docker0: Link UP Feb 13 15:17:06.997522 dockerd[2274]: time="2025-02-13T15:17:06.997448271Z" level=info msg="Loading containers: done." Feb 13 15:17:07.021439 dockerd[2274]: time="2025-02-13T15:17:07.021359135Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:17:07.021653 dockerd[2274]: time="2025-02-13T15:17:07.021503747Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:17:07.021712 dockerd[2274]: time="2025-02-13T15:17:07.021688463Z" level=info msg="Daemon has completed initialization" Feb 13 15:17:07.078192 dockerd[2274]: time="2025-02-13T15:17:07.077970156Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:17:07.078342 systemd[1]: Started docker.service - Docker Application Container Engine. 
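Once dockerd logs "API listen on /run/docker.sock", the daemon answers plain HTTP over that Unix socket. A dependency-free liveness check against the engine's real /_ping endpoint, which returns "OK" when the daemon is healthy (the official SDKs do the same thing with more ceremony); assumes the default socket path shown in the log:

    # Ping the Docker Engine API over its Unix socket, no libraries needed.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    # HTTP/1.0 keeps the exchange simple: no chunking, no keep-alive.
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")

    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()

    print(reply.split(b"\r\n")[0])            # e.g. b'HTTP/1.0 200 OK'
    print(reply.rsplit(b"\r\n\r\n", 1)[-1])   # body: b'OK'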
Feb 13 15:17:08.402759 containerd[1944]: time="2025-02-13T15:17:08.402064310Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 15:17:09.007751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531978327.mount: Deactivated successfully.
Feb 13 15:17:10.516359 containerd[1944]: time="2025-02-13T15:17:10.516280061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:10.518399 containerd[1944]: time="2025-02-13T15:17:10.518314181Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207"
Feb 13 15:17:10.520591 containerd[1944]: time="2025-02-13T15:17:10.520494509Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:10.526054 containerd[1944]: time="2025-02-13T15:17:10.525957425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:10.528571 containerd[1944]: time="2025-02-13T15:17:10.528318761Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.126126795s"
Feb 13 15:17:10.528571 containerd[1944]: time="2025-02-13T15:17:10.528374441Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 15:17:10.571397 containerd[1944]: time="2025-02-13T15:17:10.571275089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 15:17:12.806177 containerd[1944]: time="2025-02-13T15:17:12.805394684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:12.808016 containerd[1944]: time="2025-02-13T15:17:12.807933080Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594"
Feb 13 15:17:12.810058 containerd[1944]: time="2025-02-13T15:17:12.810009332Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:12.822036 containerd[1944]: time="2025-02-13T15:17:12.821955080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:12.825334 containerd[1944]: time="2025-02-13T15:17:12.825269792Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.253663059s"
Feb 13 15:17:12.825428 containerd[1944]: time="2025-02-13T15:17:12.825328220Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 15:17:12.866014 containerd[1944]: time="2025-02-13T15:17:12.865914248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 15:17:14.291088 containerd[1944]: time="2025-02-13T15:17:14.289100779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:14.292257 containerd[1944]: time="2025-02-13T15:17:14.292181707Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934"
Feb 13 15:17:14.295168 containerd[1944]: time="2025-02-13T15:17:14.295096939Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:14.300540 containerd[1944]: time="2025-02-13T15:17:14.300488815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:17:14.302988 containerd[1944]: time="2025-02-13T15:17:14.302924864Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.436951732s"
Feb 13 15:17:14.302988 containerd[1944]: time="2025-02-13T15:17:14.302982992Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 15:17:14.339955 containerd[1944]: time="2025-02-13T15:17:14.339890000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:17:14.397709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:17:14.404486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:14.727401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:14.727840 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:17:14.808178 kubelet[2553]: E0213 15:17:14.807899 2553 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:17:14.818631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:17:14.818994 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:17:15.673737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954163471.mount: Deactivated successfully.
Feb 13 15:17:16.177005 containerd[1944]: time="2025-02-13T15:17:16.176917077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:16.178471 containerd[1944]: time="2025-02-13T15:17:16.178400553Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 15:17:16.180073 containerd[1944]: time="2025-02-13T15:17:16.179973789Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:16.184572 containerd[1944]: time="2025-02-13T15:17:16.184496205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:16.186015 containerd[1944]: time="2025-02-13T15:17:16.185804277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.845834141s" Feb 13 15:17:16.186015 containerd[1944]: time="2025-02-13T15:17:16.185861709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:17:16.225149 containerd[1944]: time="2025-02-13T15:17:16.225031905Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:17:16.760707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942075097.mount: Deactivated successfully. 
Feb 13 15:17:17.845140 containerd[1944]: time="2025-02-13T15:17:17.844278373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:17.848192 containerd[1944]: time="2025-02-13T15:17:17.847675837Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:17:17.851149 containerd[1944]: time="2025-02-13T15:17:17.849562633Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:17.869700 containerd[1944]: time="2025-02-13T15:17:17.869634901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:17.874227 containerd[1944]: time="2025-02-13T15:17:17.874171441Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.64907788s" Feb 13 15:17:17.876189 containerd[1944]: time="2025-02-13T15:17:17.876154021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:17:17.919843 containerd[1944]: time="2025-02-13T15:17:17.919789513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:17:18.373801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488687336.mount: Deactivated successfully. 
Feb 13 15:17:18.381717 containerd[1944]: time="2025-02-13T15:17:18.381389616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.383008 containerd[1944]: time="2025-02-13T15:17:18.382957536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 15:17:18.384150 containerd[1944]: time="2025-02-13T15:17:18.384044844Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.388463 containerd[1944]: time="2025-02-13T15:17:18.388371120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.390156 containerd[1944]: time="2025-02-13T15:17:18.389944632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 469.919871ms" Feb 13 15:17:18.390156 containerd[1944]: time="2025-02-13T15:17:18.389996112Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:17:18.431397 containerd[1944]: time="2025-02-13T15:17:18.431322132Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:17:18.935006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4124298819.mount: Deactivated successfully. Feb 13 15:17:22.011551 containerd[1944]: time="2025-02-13T15:17:22.011451830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:22.013410 containerd[1944]: time="2025-02-13T15:17:22.013311494Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 15:17:22.014591 containerd[1944]: time="2025-02-13T15:17:22.014502266Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:22.020451 containerd[1944]: time="2025-02-13T15:17:22.020369090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:22.025153 containerd[1944]: time="2025-02-13T15:17:22.022943030Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.591563742s" Feb 13 15:17:22.025153 containerd[1944]: time="2025-02-13T15:17:22.023005730Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:17:25.069372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
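Each completed pull above pairs an unpacked size with a duration, so per-image throughput falls out directly. The sizes (bytes) and durations (seconds) below are copied from the "Pulled image ... size ... in ..." records in this log:

    # Pull throughput from the containerd records above.
    pulls = {
        "kube-apiserver:v1.30.10":          (29_862_007, 2.126126795),
        "kube-controller-manager:v1.30.10": (28_302_323, 2.253663059),
        "kube-scheduler:v1.30.10":          (17_568_681, 1.436951732),
        "kube-proxy:v1.30.10":              (25_662_389, 1.845834141),
        "coredns:v1.11.1":                  (16_482_581, 1.64907788),
        "etcd:3.5.12-0":                    (66_189_079, 3.591563742),
    }

    for image, (size_bytes, secs) in pulls.items():
        print(f"{image}: {size_bytes / secs / 1e6:.1f} MB/s")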
Feb 13 15:17:25.077526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:25.383613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:25.388354 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:25.471059 kubelet[2741]: E0213 15:17:25.471001 2741 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:25.476721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:25.477046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:28.512335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:28.527615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:28.572736 systemd[1]: Reloading requested from client PID 2755 ('systemctl') (unit session-7.scope)... Feb 13 15:17:28.572773 systemd[1]: Reloading... Feb 13 15:17:28.810145 zram_generator::config[2799]: No configuration found. Feb 13 15:17:29.023290 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:17:29.186574 systemd[1]: Reloading finished in 613 ms. Feb 13 15:17:29.278633 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:17:29.278863 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:17:29.279518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:29.287693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:29.559425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:29.569647 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:17:29.645781 kubelet[2860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:29.646290 kubelet[2860]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:17:29.646290 kubelet[2860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:29.647621 kubelet[2860]: I0213 15:17:29.647544 2860 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:17:30.419320 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 15:17:31.524236 kubelet[2860]: I0213 15:17:31.523593 2860 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:17:31.524236 kubelet[2860]: I0213 15:17:31.523635 2860 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:17:31.524236 kubelet[2860]: I0213 15:17:31.523959 2860 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:17:31.550657 kubelet[2860]: E0213 15:17:31.550598 2860 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.551333 kubelet[2860]: I0213 15:17:31.551129 2860 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:31.567472 kubelet[2860]: I0213 15:17:31.567433 2860 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:17:31.568172 kubelet[2860]: I0213 15:17:31.568094 2860 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:17:31.569094 kubelet[2860]: I0213 15:17:31.568275 2860 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:17:31.569094 kubelet[2860]: I0213 15:17:31.568579 2860 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:17:31.569094 kubelet[2860]: I0213 15:17:31.568598 2860 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:17:31.569094 kubelet[2860]: I0213 15:17:31.568829 2860 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:31.570370 kubelet[2860]: I0213 15:17:31.570343 2860 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:17:31.570502 kubelet[2860]: I0213 15:17:31.570482 2860 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Feb 13 15:17:31.570724 kubelet[2860]: I0213 15:17:31.570703 2860 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:17:31.570907 kubelet[2860]: I0213 15:17:31.570885 2860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:17:31.572482 kubelet[2860]: W0213 15:17:31.572412 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.573018 kubelet[2860]: E0213 15:17:31.572691 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.573018 kubelet[2860]: W0213 15:17:31.572892 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-87&limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.573018 kubelet[2860]: E0213 15:17:31.572976 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-87&limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.573621 kubelet[2860]: I0213 15:17:31.573592 2860 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:17:31.575148 kubelet[2860]: I0213 15:17:31.574041 2860 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:17:31.575148 kubelet[2860]: W0213 15:17:31.574145 2860 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
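The reflector warnings above are ordinary paginated list calls against https://172.31.28.87:6443 that fail while the static kube-apiserver pod is still being created. A minimal client-go sketch of the same Services list; the kubeconfig path is an assumption (the kubeadm default), not something shown in the log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; hypothetical for this sketch.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The same shape of request the informer retries: list with limit=500.
    	_, err = cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(),
    		metav1.ListOptions{Limit: 500})
    	fmt.Println(err) // "connect: connection refused" until the apiserver is up
    }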
Feb 13 15:17:31.575786 kubelet[2860]: I0213 15:17:31.575757 2860 server.go:1264] "Started kubelet" Feb 13 15:17:31.583550 kubelet[2860]: I0213 15:17:31.583510 2860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:17:31.585453 kubelet[2860]: E0213 15:17:31.585197 2860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.87:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-87.1823cd84da397fdd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-87,UID:ip-172-31-28-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-87,},FirstTimestamp:2025-02-13 15:17:31.575721949 +0000 UTC m=+1.999068299,LastTimestamp:2025-02-13 15:17:31.575721949 +0000 UTC m=+1.999068299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-87,}" Feb 13 15:17:31.594773 kubelet[2860]: I0213 15:17:31.594697 2860 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:17:31.595636 kubelet[2860]: I0213 15:17:31.595587 2860 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:17:31.596936 kubelet[2860]: I0213 15:17:31.596898 2860 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:17:31.598775 kubelet[2860]: I0213 15:17:31.598698 2860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:17:31.599288 kubelet[2860]: I0213 15:17:31.599259 2860 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:17:31.600460 kubelet[2860]: E0213 15:17:31.600378 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-87?timeout=10s\": dial tcp 172.31.28.87:6443: connect: connection refused" interval="200ms" Feb 13 15:17:31.602781 kubelet[2860]: I0213 15:17:31.600789 2860 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:17:31.602781 kubelet[2860]: I0213 15:17:31.600934 2860 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:17:31.602781 kubelet[2860]: I0213 15:17:31.601002 2860 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:17:31.602781 kubelet[2860]: I0213 15:17:31.601159 2860 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:17:31.602781 kubelet[2860]: W0213 15:17:31.602631 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.602781 kubelet[2860]: E0213 15:17:31.602718 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.606583 kubelet[2860]: I0213 15:17:31.606059 2860 
factory.go:221] Registration of the containerd container factory successfully Feb 13 15:17:31.621012 kubelet[2860]: I0213 15:17:31.620960 2860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:17:31.623256 kubelet[2860]: I0213 15:17:31.623217 2860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:17:31.623465 kubelet[2860]: I0213 15:17:31.623446 2860 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:17:31.623580 kubelet[2860]: I0213 15:17:31.623561 2860 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:17:31.623776 kubelet[2860]: E0213 15:17:31.623745 2860 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:17:31.634749 kubelet[2860]: E0213 15:17:31.634709 2860 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:17:31.635349 kubelet[2860]: W0213 15:17:31.635282 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.637921 kubelet[2860]: E0213 15:17:31.637828 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:31.652485 kubelet[2860]: I0213 15:17:31.652417 2860 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:17:31.652485 kubelet[2860]: I0213 15:17:31.652456 2860 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:17:31.652485 kubelet[2860]: I0213 15:17:31.652491 2860 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:31.655491 kubelet[2860]: I0213 15:17:31.655438 2860 policy_none.go:49] "None policy: Start" Feb 13 15:17:31.656864 kubelet[2860]: I0213 15:17:31.656723 2860 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:17:31.656864 kubelet[2860]: I0213 15:17:31.656765 2860 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:17:31.666593 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:17:31.681188 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:17:31.688241 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:17:31.698667 kubelet[2860]: I0213 15:17:31.698020 2860 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:17:31.698667 kubelet[2860]: I0213 15:17:31.698359 2860 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:17:31.698667 kubelet[2860]: I0213 15:17:31.698553 2860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:17:31.699604 kubelet[2860]: I0213 15:17:31.699307 2860 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-87" Feb 13 15:17:31.700713 kubelet[2860]: E0213 15:17:31.699795 2860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.87:6443/api/v1/nodes\": dial tcp 172.31.28.87:6443: connect: connection refused" node="ip-172-31-28-87" Feb 13 15:17:31.703508 kubelet[2860]: E0213 15:17:31.703451 2860 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-87\" not found" Feb 13 15:17:31.724336 kubelet[2860]: I0213 15:17:31.724263 2860 topology_manager.go:215] "Topology Admit Handler" podUID="ff79ece3ec5d80ee16d3a9c4c01f1a8c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-87" Feb 13 15:17:31.726952 kubelet[2860]: I0213 15:17:31.726820 2860 topology_manager.go:215] "Topology Admit Handler" podUID="6ba55ec4881437a6c544e921ab28a4e8" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:31.729670 kubelet[2860]: I0213 15:17:31.728829 2860 topology_manager.go:215] "Topology Admit Handler" podUID="ba65078f5c4522b15f98b3a119e4cdd4" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-87" Feb 13 15:17:31.742250 systemd[1]: Created slice kubepods-burstable-podff79ece3ec5d80ee16d3a9c4c01f1a8c.slice - libcontainer container kubepods-burstable-podff79ece3ec5d80ee16d3a9c4c01f1a8c.slice. Feb 13 15:17:31.763909 systemd[1]: Created slice kubepods-burstable-pod6ba55ec4881437a6c544e921ab28a4e8.slice - libcontainer container kubepods-burstable-pod6ba55ec4881437a6c544e921ab28a4e8.slice. Feb 13 15:17:31.779212 systemd[1]: Created slice kubepods-burstable-podba65078f5c4522b15f98b3a119e4cdd4.slice - libcontainer container kubepods-burstable-podba65078f5c4522b15f98b3a119e4cdd4.slice. 
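The slice names systemd reports above are derived mechanically from the pod's QoS class and UID; dashes in the UID are escaped to underscores (visible in the kube-proxy/cilium slices further down). A small illustration of that mapping — an approximation for this log, not the kubelet's actual helper:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice approximates the names visible above, e.g.
    // ("burstable", "ff79ece3...") -> kubepods-burstable-podff79ece3....slice.
    // Assumption: UIDs contain only hex digits and dashes.
    func podSlice(qos, uid string) string {
    	esc := strings.ReplaceAll(uid, "-", "_")
    	if qos == "" { // Guaranteed pods land directly under kubepods.slice
    		return "kubepods-pod" + esc + ".slice"
    	}
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, esc)
    }

    func main() {
    	fmt.Println(podSlice("burstable", "ff79ece3ec5d80ee16d3a9c4c01f1a8c"))
    	fmt.Println(podSlice("besteffort", "d3134349-d0fc-43a4-b22e-98bfa78b077d"))
    }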
Feb 13 15:17:31.802164 kubelet[2860]: E0213 15:17:31.802063 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-87?timeout=10s\": dial tcp 172.31.28.87:6443: connect: connection refused" interval="400ms" Feb 13 15:17:31.902050 kubelet[2860]: I0213 15:17:31.901891 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:31.902050 kubelet[2860]: I0213 15:17:31.901953 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:31.902050 kubelet[2860]: I0213 15:17:31.902000 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba65078f5c4522b15f98b3a119e4cdd4-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-87\" (UID: \"ba65078f5c4522b15f98b3a119e4cdd4\") " pod="kube-system/kube-scheduler-ip-172-31-28-87" Feb 13 15:17:31.902050 kubelet[2860]: I0213 15:17:31.902041 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff79ece3ec5d80ee16d3a9c4c01f1a8c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-87\" (UID: \"ff79ece3ec5d80ee16d3a9c4c01f1a8c\") " pod="kube-system/kube-apiserver-ip-172-31-28-87" Feb 13 15:17:31.902371 kubelet[2860]: I0213 15:17:31.902078 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff79ece3ec5d80ee16d3a9c4c01f1a8c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-87\" (UID: \"ff79ece3ec5d80ee16d3a9c4c01f1a8c\") " pod="kube-system/kube-apiserver-ip-172-31-28-87" Feb 13 15:17:31.902371 kubelet[2860]: I0213 15:17:31.902142 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:31.902371 kubelet[2860]: I0213 15:17:31.902191 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:31.902371 kubelet[2860]: I0213 15:17:31.902227 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff79ece3ec5d80ee16d3a9c4c01f1a8c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-87\" (UID: 
\"ff79ece3ec5d80ee16d3a9c4c01f1a8c\") " pod="kube-system/kube-apiserver-ip-172-31-28-87" Feb 13 15:17:31.902371 kubelet[2860]: I0213 15:17:31.902263 2860 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:31.902768 kubelet[2860]: I0213 15:17:31.902639 2860 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-87" Feb 13 15:17:31.903146 kubelet[2860]: E0213 15:17:31.903059 2860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.87:6443/api/v1/nodes\": dial tcp 172.31.28.87:6443: connect: connection refused" node="ip-172-31-28-87" Feb 13 15:17:32.060924 containerd[1944]: time="2025-02-13T15:17:32.060769500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-87,Uid:ff79ece3ec5d80ee16d3a9c4c01f1a8c,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:32.071162 containerd[1944]: time="2025-02-13T15:17:32.070730484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-87,Uid:6ba55ec4881437a6c544e921ab28a4e8,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:32.086020 containerd[1944]: time="2025-02-13T15:17:32.085939704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-87,Uid:ba65078f5c4522b15f98b3a119e4cdd4,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:32.118746 kubelet[2860]: E0213 15:17:32.118591 2860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.87:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-87.1823cd84da397fdd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-87,UID:ip-172-31-28-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-87,},FirstTimestamp:2025-02-13 15:17:31.575721949 +0000 UTC m=+1.999068299,LastTimestamp:2025-02-13 15:17:31.575721949 +0000 UTC m=+1.999068299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-87,}" Feb 13 15:17:32.203599 kubelet[2860]: E0213 15:17:32.203528 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-87?timeout=10s\": dial tcp 172.31.28.87:6443: connect: connection refused" interval="800ms" Feb 13 15:17:32.305891 kubelet[2860]: I0213 15:17:32.305840 2860 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-87" Feb 13 15:17:32.306472 kubelet[2860]: E0213 15:17:32.306416 2860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.87:6443/api/v1/nodes\": dial tcp 172.31.28.87:6443: connect: connection refused" node="ip-172-31-28-87" Feb 13 15:17:32.548771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950254105.mount: Deactivated successfully. 
Feb 13 15:17:32.557225 containerd[1944]: time="2025-02-13T15:17:32.556819034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:32.559292 containerd[1944]: time="2025-02-13T15:17:32.559235798Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:32.561478 containerd[1944]: time="2025-02-13T15:17:32.561412478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:17:32.562518 containerd[1944]: time="2025-02-13T15:17:32.562454846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:17:32.565018 containerd[1944]: time="2025-02-13T15:17:32.564962726Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:32.566641 containerd[1944]: time="2025-02-13T15:17:32.566459030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:17:32.567595 containerd[1944]: time="2025-02-13T15:17:32.567180014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:32.574930 containerd[1944]: time="2025-02-13T15:17:32.574845590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:32.576929 containerd[1944]: time="2025-02-13T15:17:32.576638618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.572974ms" Feb 13 15:17:32.591579 containerd[1944]: time="2025-02-13T15:17:32.591519098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.634794ms" Feb 13 15:17:32.592915 containerd[1944]: time="2025-02-13T15:17:32.592850378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 522.00491ms" Feb 13 15:17:32.621999 kubelet[2860]: W0213 15:17:32.621906 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:32.622526 kubelet[2860]: E0213 
15:17:32.622008 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:32.802599 containerd[1944]: time="2025-02-13T15:17:32.799580403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:32.802599 containerd[1944]: time="2025-02-13T15:17:32.799696395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:32.802599 containerd[1944]: time="2025-02-13T15:17:32.799732695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:32.802599 containerd[1944]: time="2025-02-13T15:17:32.799874367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:32.814921 containerd[1944]: time="2025-02-13T15:17:32.813250191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:32.814921 containerd[1944]: time="2025-02-13T15:17:32.813388071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:32.814921 containerd[1944]: time="2025-02-13T15:17:32.813419307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:32.814921 containerd[1944]: time="2025-02-13T15:17:32.813591627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:32.817591 containerd[1944]: time="2025-02-13T15:17:32.817290351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:32.817591 containerd[1944]: time="2025-02-13T15:17:32.817508775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:32.819649 containerd[1944]: time="2025-02-13T15:17:32.817555227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:32.819801 containerd[1944]: time="2025-02-13T15:17:32.819533151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:32.856625 kubelet[2860]: W0213 15:17:32.856495 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:32.856625 kubelet[2860]: E0213 15:17:32.856588 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:32.857444 systemd[1]: Started cri-containerd-c2c7ae445fb8b783f64daca2cc4acfd479999a507911ef977a0ba0101e35c44f.scope - libcontainer container c2c7ae445fb8b783f64daca2cc4acfd479999a507911ef977a0ba0101e35c44f. Feb 13 15:17:32.871902 systemd[1]: Started cri-containerd-6414b32b86e6723e394947926757cedcd644562d2d067934a7992c65ebdb8276.scope - libcontainer container 6414b32b86e6723e394947926757cedcd644562d2d067934a7992c65ebdb8276. Feb 13 15:17:32.877725 kubelet[2860]: W0213 15:17:32.875545 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:32.877725 kubelet[2860]: E0213 15:17:32.877495 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:32.888452 systemd[1]: Started cri-containerd-bee4981e0912e20c0b4e78f3c03adad45dacd483de98a34d44327d0d37c4b223.scope - libcontainer container bee4981e0912e20c0b4e78f3c03adad45dacd483de98a34d44327d0d37c4b223. 
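The cri-containerd-<id>.scope units above take their 64-hex-char names from the sandbox IDs containerd returns over the CRI socket. A minimal sketch of that call path, assuming containerd's default socket and omitting the fuller sandbox config (log directory, DNS, ports, security context) a real kubelet sends, so a live call may still be rejected:

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Default containerd CRI endpoint; an assumption, not shown in the log.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := client.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-apiserver-ip-172-31-28-87",
    				Namespace: "kube-system",
    				Uid:       "ff79ece3ec5d80ee16d3a9c4c01f1a8c",
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	// This ID is what systemd wraps in a cri-containerd-<id>.scope unit.
    	fmt.Println(resp.PodSandboxId)
    }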
Feb 13 15:17:32.988743 containerd[1944]: time="2025-02-13T15:17:32.988617088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-87,Uid:6ba55ec4881437a6c544e921ab28a4e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6414b32b86e6723e394947926757cedcd644562d2d067934a7992c65ebdb8276\"" Feb 13 15:17:33.003293 containerd[1944]: time="2025-02-13T15:17:33.002954580Z" level=info msg="CreateContainer within sandbox \"6414b32b86e6723e394947926757cedcd644562d2d067934a7992c65ebdb8276\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:17:33.006549 kubelet[2860]: E0213 15:17:33.005344 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-87?timeout=10s\": dial tcp 172.31.28.87:6443: connect: connection refused" interval="1.6s" Feb 13 15:17:33.007777 containerd[1944]: time="2025-02-13T15:17:33.007711968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-87,Uid:ba65078f5c4522b15f98b3a119e4cdd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bee4981e0912e20c0b4e78f3c03adad45dacd483de98a34d44327d0d37c4b223\"" Feb 13 15:17:33.015151 containerd[1944]: time="2025-02-13T15:17:33.015071412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-87,Uid:ff79ece3ec5d80ee16d3a9c4c01f1a8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2c7ae445fb8b783f64daca2cc4acfd479999a507911ef977a0ba0101e35c44f\"" Feb 13 15:17:33.017945 containerd[1944]: time="2025-02-13T15:17:33.016079172Z" level=info msg="CreateContainer within sandbox \"bee4981e0912e20c0b4e78f3c03adad45dacd483de98a34d44327d0d37c4b223\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:17:33.021490 containerd[1944]: time="2025-02-13T15:17:33.021423576Z" level=info msg="CreateContainer within sandbox \"c2c7ae445fb8b783f64daca2cc4acfd479999a507911ef977a0ba0101e35c44f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:17:33.051823 kubelet[2860]: W0213 15:17:33.051743 2860 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-87&limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:33.051823 kubelet[2860]: E0213 15:17:33.051839 2860 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-87&limit=500&resourceVersion=0": dial tcp 172.31.28.87:6443: connect: connection refused Feb 13 15:17:33.053035 containerd[1944]: time="2025-02-13T15:17:33.052668385Z" level=info msg="CreateContainer within sandbox \"6414b32b86e6723e394947926757cedcd644562d2d067934a7992c65ebdb8276\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff\"" Feb 13 15:17:33.053631 containerd[1944]: time="2025-02-13T15:17:33.053578429Z" level=info msg="StartContainer for \"f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff\"" Feb 13 15:17:33.064762 containerd[1944]: time="2025-02-13T15:17:33.064584889Z" level=info msg="CreateContainer within sandbox \"bee4981e0912e20c0b4e78f3c03adad45dacd483de98a34d44327d0d37c4b223\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1\"" Feb 13 15:17:33.066373 containerd[1944]: time="2025-02-13T15:17:33.065713141Z" level=info msg="StartContainer for \"671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1\"" Feb 13 15:17:33.069004 containerd[1944]: time="2025-02-13T15:17:33.068850157Z" level=info msg="CreateContainer within sandbox \"c2c7ae445fb8b783f64daca2cc4acfd479999a507911ef977a0ba0101e35c44f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fc29daa2d5ef35afc15d13bcb2e3a1365717e0a3a3be97b26dde148c4f8b4463\"" Feb 13 15:17:33.071144 containerd[1944]: time="2025-02-13T15:17:33.069607657Z" level=info msg="StartContainer for \"fc29daa2d5ef35afc15d13bcb2e3a1365717e0a3a3be97b26dde148c4f8b4463\"" Feb 13 15:17:33.110949 kubelet[2860]: I0213 15:17:33.110571 2860 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-87" Feb 13 15:17:33.114044 kubelet[2860]: E0213 15:17:33.113977 2860 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.87:6443/api/v1/nodes\": dial tcp 172.31.28.87:6443: connect: connection refused" node="ip-172-31-28-87" Feb 13 15:17:33.117983 systemd[1]: Started cri-containerd-f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff.scope - libcontainer container f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff. Feb 13 15:17:33.163391 systemd[1]: Started cri-containerd-fc29daa2d5ef35afc15d13bcb2e3a1365717e0a3a3be97b26dde148c4f8b4463.scope - libcontainer container fc29daa2d5ef35afc15d13bcb2e3a1365717e0a3a3be97b26dde148c4f8b4463. Feb 13 15:17:33.178423 systemd[1]: Started cri-containerd-671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1.scope - libcontainer container 671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1. Feb 13 15:17:33.258043 containerd[1944]: time="2025-02-13T15:17:33.257815202Z" level=info msg="StartContainer for \"f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff\" returns successfully" Feb 13 15:17:33.281848 containerd[1944]: time="2025-02-13T15:17:33.281743538Z" level=info msg="StartContainer for \"fc29daa2d5ef35afc15d13bcb2e3a1365717e0a3a3be97b26dde148c4f8b4463\" returns successfully" Feb 13 15:17:33.345497 containerd[1944]: time="2025-02-13T15:17:33.345221390Z" level=info msg="StartContainer for \"671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1\" returns successfully" Feb 13 15:17:34.718410 kubelet[2860]: I0213 15:17:34.718355 2860 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-87" Feb 13 15:17:37.320285 kubelet[2860]: E0213 15:17:37.320221 2860 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-87\" not found" node="ip-172-31-28-87" Feb 13 15:17:37.362138 kubelet[2860]: I0213 15:17:37.362072 2860 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-87" Feb 13 15:17:37.575415 kubelet[2860]: I0213 15:17:37.575268 2860 apiserver.go:52] "Watching apiserver" Feb 13 15:17:37.601387 kubelet[2860]: I0213 15:17:37.601308 2860 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:17:39.536844 systemd[1]: Reloading requested from client PID 3141 ('systemctl') (unit session-7.scope)... Feb 13 15:17:39.536912 systemd[1]: Reloading... Feb 13 15:17:39.806163 zram_generator::config[3184]: No configuration found. 
Feb 13 15:17:40.057399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:17:40.265550 systemd[1]: Reloading finished in 727 ms. Feb 13 15:17:40.354782 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:40.355416 kubelet[2860]: E0213 15:17:40.354763 2860 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ip-172-31-28-87.1823cd84da397fdd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-87,UID:ip-172-31-28-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-87,},FirstTimestamp:2025-02-13 15:17:31.575721949 +0000 UTC m=+1.999068299,LastTimestamp:2025-02-13 15:17:31.575721949 +0000 UTC m=+1.999068299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-87,}" Feb 13 15:17:40.375888 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:17:40.378231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:40.378331 systemd[1]: kubelet.service: Consumed 2.635s CPU time, 112.9M memory peak, 0B memory swap peak. Feb 13 15:17:40.394360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:40.699449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:40.710802 (kubelet)[3242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:17:40.801355 kubelet[3242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:40.801355 kubelet[3242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:17:40.801355 kubelet[3242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:40.801355 kubelet[3242]: I0213 15:17:40.800199 3242 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:17:40.818688 kubelet[3242]: I0213 15:17:40.818560 3242 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:17:40.818688 kubelet[3242]: I0213 15:17:40.818606 3242 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:17:40.819573 kubelet[3242]: I0213 15:17:40.818967 3242 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:17:40.823289 kubelet[3242]: I0213 15:17:40.821915 3242 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
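kubelet-client-current.pem above carries both the client certificate and its private key in one file (it is a symlink that rotation swaps atomically), which is why this restart can skip the CSR bootstrap that failed earlier. A short sketch that loads the pair and prints the expiry rotation will renew, assuming only the path from the log:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    )

    func main() {
    	const p = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    	// Cert and key live in the same PEM file, so the path is passed twice.
    	pair, err := tls.LoadX509KeyPair(p, p)
    	if err != nil {
    		panic(err)
    	}
    	leaf, err := x509.ParseCertificate(pair.Certificate[0])
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("client cert expires:", leaf.NotAfter)
    }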
Feb 13 15:17:40.824508 kubelet[3242]: I0213 15:17:40.824446 3242 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:40.839020 kubelet[3242]: I0213 15:17:40.838967 3242 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:17:40.840323 kubelet[3242]: I0213 15:17:40.839454 3242 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:17:40.840323 kubelet[3242]: I0213 15:17:40.839501 3242 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:17:40.840323 kubelet[3242]: I0213 15:17:40.839803 3242 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:17:40.840323 kubelet[3242]: I0213 15:17:40.839823 3242 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:17:40.842260 kubelet[3242]: I0213 15:17:40.840774 3242 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:40.842260 kubelet[3242]: I0213 15:17:40.841643 3242 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:17:40.842260 kubelet[3242]: I0213 15:17:40.841673 3242 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:17:40.842260 kubelet[3242]: I0213 15:17:40.841746 3242 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:17:40.842260 kubelet[3242]: I0213 15:17:40.841787 3242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:17:40.845978 kubelet[3242]: I0213 15:17:40.844667 3242 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:17:40.845978 kubelet[3242]: I0213 15:17:40.845006 3242 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:17:40.848259 kubelet[3242]: I0213 15:17:40.847248 3242 server.go:1264] "Started kubelet" Feb 13 15:17:40.851185 kubelet[3242]: 
I0213 15:17:40.851102 3242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:17:40.869810 sudo[3255]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:17:40.870749 sudo[3255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:17:40.879510 kubelet[3242]: I0213 15:17:40.876807 3242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:17:40.881601 kubelet[3242]: I0213 15:17:40.881313 3242 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:17:40.885251 kubelet[3242]: I0213 15:17:40.884866 3242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:17:40.885399 kubelet[3242]: I0213 15:17:40.885340 3242 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:17:40.897944 kubelet[3242]: I0213 15:17:40.894667 3242 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:17:40.906151 kubelet[3242]: I0213 15:17:40.899339 3242 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:17:40.906151 kubelet[3242]: I0213 15:17:40.899646 3242 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:17:40.910194 kubelet[3242]: I0213 15:17:40.910027 3242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:17:40.912860 kubelet[3242]: I0213 15:17:40.912803 3242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:17:40.913163 kubelet[3242]: I0213 15:17:40.912871 3242 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:17:40.913163 kubelet[3242]: I0213 15:17:40.912905 3242 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:17:40.913163 kubelet[3242]: E0213 15:17:40.913005 3242 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:17:40.924148 kubelet[3242]: I0213 15:17:40.922766 3242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:17:40.939003 kubelet[3242]: E0213 15:17:40.938951 3242 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:17:40.944145 kubelet[3242]: I0213 15:17:40.943856 3242 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:17:40.944145 kubelet[3242]: I0213 15:17:40.943896 3242 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:17:41.017069 kubelet[3242]: E0213 15:17:41.017008 3242 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:17:41.057554 kubelet[3242]: I0213 15:17:41.057501 3242 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-87" Feb 13 15:17:41.110452 kubelet[3242]: I0213 15:17:41.110399 3242 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-28-87" Feb 13 15:17:41.114565 kubelet[3242]: I0213 15:17:41.114445 3242 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-87" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.170874 3242 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.170910 3242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.170947 3242 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.171286 3242 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.171307 3242 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.171348 3242 policy_none.go:49] "None policy: Start" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.173694 3242 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.173736 3242 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:17:41.174582 kubelet[3242]: I0213 15:17:41.174491 3242 state_mem.go:75] "Updated machine memory state" Feb 13 15:17:41.191423 kubelet[3242]: I0213 15:17:41.190329 3242 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:17:41.191720 kubelet[3242]: I0213 15:17:41.190767 3242 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:17:41.194287 kubelet[3242]: I0213 15:17:41.192098 3242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:17:41.248010 kubelet[3242]: I0213 15:17:41.247693 3242 topology_manager.go:215] "Topology Admit Handler" podUID="ba65078f5c4522b15f98b3a119e4cdd4" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-87" Feb 13 15:17:41.248010 kubelet[3242]: I0213 15:17:41.247849 3242 topology_manager.go:215] "Topology Admit Handler" podUID="ff79ece3ec5d80ee16d3a9c4c01f1a8c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-87" Feb 13 15:17:41.248010 kubelet[3242]: I0213 15:17:41.247927 3242 topology_manager.go:215] "Topology Admit Handler" podUID="6ba55ec4881437a6c544e921ab28a4e8" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:41.307886 kubelet[3242]: I0213 15:17:41.304559 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:41.307886 kubelet[3242]: I0213 15:17:41.307205 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff79ece3ec5d80ee16d3a9c4c01f1a8c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-87\" (UID: \"ff79ece3ec5d80ee16d3a9c4c01f1a8c\") " pod="kube-system/kube-apiserver-ip-172-31-28-87" Feb 13 15:17:41.307886 kubelet[3242]: I0213 15:17:41.307305 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff79ece3ec5d80ee16d3a9c4c01f1a8c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-87\" (UID: \"ff79ece3ec5d80ee16d3a9c4c01f1a8c\") " pod="kube-system/kube-apiserver-ip-172-31-28-87" Feb 13 15:17:41.307886 kubelet[3242]: I0213 15:17:41.307396 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:41.308541 kubelet[3242]: I0213 15:17:41.307993 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:41.308541 kubelet[3242]: I0213 15:17:41.308051 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:41.308541 kubelet[3242]: I0213 15:17:41.308094 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba65078f5c4522b15f98b3a119e4cdd4-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-87\" (UID: \"ba65078f5c4522b15f98b3a119e4cdd4\") " pod="kube-system/kube-scheduler-ip-172-31-28-87" Feb 13 15:17:41.308541 kubelet[3242]: I0213 15:17:41.308175 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff79ece3ec5d80ee16d3a9c4c01f1a8c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-87\" (UID: \"ff79ece3ec5d80ee16d3a9c4c01f1a8c\") " pod="kube-system/kube-apiserver-ip-172-31-28-87" Feb 13 15:17:41.308541 kubelet[3242]: I0213 15:17:41.308217 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba55ec4881437a6c544e921ab28a4e8-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-87\" (UID: \"6ba55ec4881437a6c544e921ab28a4e8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-87" Feb 13 15:17:41.845149 kubelet[3242]: I0213 15:17:41.843699 3242 apiserver.go:52] "Watching apiserver" Feb 13 15:17:41.859556 sudo[3255]: 
pam_unix(sudo:session): session closed for user root Feb 13 15:17:41.899626 kubelet[3242]: I0213 15:17:41.899560 3242 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:17:42.026136 kubelet[3242]: E0213 15:17:42.026066 3242 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-87\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-87" Feb 13 15:17:42.040862 kubelet[3242]: I0213 15:17:42.039270 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-87" podStartSLOduration=1.039231237 podStartE2EDuration="1.039231237s" podCreationTimestamp="2025-02-13 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:42.038682525 +0000 UTC m=+1.318380283" watchObservedRunningTime="2025-02-13 15:17:42.039231237 +0000 UTC m=+1.318929007" Feb 13 15:17:42.061671 kubelet[3242]: I0213 15:17:42.061511 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-87" podStartSLOduration=1.061467141 podStartE2EDuration="1.061467141s" podCreationTimestamp="2025-02-13 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:42.058878105 +0000 UTC m=+1.338575875" watchObservedRunningTime="2025-02-13 15:17:42.061467141 +0000 UTC m=+1.341164947" Feb 13 15:17:42.103564 kubelet[3242]: I0213 15:17:42.102585 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-87" podStartSLOduration=1.10256377 podStartE2EDuration="1.10256377s" podCreationTimestamp="2025-02-13 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:42.082401885 +0000 UTC m=+1.362099643" watchObservedRunningTime="2025-02-13 15:17:42.10256377 +0000 UTC m=+1.382261540" Feb 13 15:17:43.814727 sudo[2256]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:43.838199 sshd[2255]: Connection closed by 139.178.68.195 port 34146 Feb 13 15:17:43.839184 sshd-session[2253]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:43.846555 systemd[1]: sshd@6-172.31.28.87:22-139.178.68.195:34146.service: Deactivated successfully. Feb 13 15:17:43.850721 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:17:43.851203 systemd[1]: session-7.scope: Consumed 9.753s CPU time, 187.7M memory peak, 0B memory swap peak. Feb 13 15:17:43.852436 systemd-logind[1917]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:17:43.854998 systemd-logind[1917]: Removed session 7. Feb 13 15:17:45.041417 update_engine[1918]: I20250213 15:17:45.041292 1918 update_attempter.cc:509] Updating boot flags... 
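In the pod_startup_latency_tracker entries above, podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp; the pull timestamps are the zero time, presumably because the static-pod images were already present. Reproducing the kube-apiserver figure from the timestamps as printed:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Go's default time.String layout, matching the timestamps in the log.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2025-02-13 15:17:41 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	observed, err := time.Parse(layout, "2025-02-13 15:17:42.039231237 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(observed.Sub(created)) // 1.039231237s == podStartSLOduration
    }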
Feb 13 15:17:45.133221 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3327) Feb 13 15:17:45.389162 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3328) Feb 13 15:17:45.651718 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3328) Feb 13 15:17:54.315276 kubelet[3242]: I0213 15:17:54.314638 3242 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:17:54.316214 containerd[1944]: time="2025-02-13T15:17:54.315170074Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:17:54.316832 kubelet[3242]: I0213 15:17:54.315498 3242 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:17:55.206273 kubelet[3242]: I0213 15:17:55.206193 3242 topology_manager.go:215] "Topology Admit Handler" podUID="d3134349-d0fc-43a4-b22e-98bfa78b077d" podNamespace="kube-system" podName="kube-proxy-gtwrg" Feb 13 15:17:55.226435 systemd[1]: Created slice kubepods-besteffort-podd3134349_d0fc_43a4_b22e_98bfa78b077d.slice - libcontainer container kubepods-besteffort-podd3134349_d0fc_43a4_b22e_98bfa78b077d.slice. Feb 13 15:17:55.244186 kubelet[3242]: I0213 15:17:55.242082 3242 topology_manager.go:215] "Topology Admit Handler" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" podNamespace="kube-system" podName="cilium-p5gml" Feb 13 15:17:55.260380 systemd[1]: Created slice kubepods-burstable-pod98afc6d9_6fdd_4efe_9960_70f04a9b2ea8.slice - libcontainer container kubepods-burstable-pod98afc6d9_6fdd_4efe_9960_70f04a9b2ea8.slice. Feb 13 15:17:55.300168 kubelet[3242]: I0213 15:17:55.300101 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-lib-modules\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.300506 kubelet[3242]: I0213 15:17:55.300460 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-config-path\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.300730 kubelet[3242]: I0213 15:17:55.300697 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmb5z\" (UniqueName: \"kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-kube-api-access-jmb5z\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.300930 kubelet[3242]: I0213 15:17:55.300900 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3134349-d0fc-43a4-b22e-98bfa78b077d-kube-proxy\") pod \"kube-proxy-gtwrg\" (UID: \"d3134349-d0fc-43a4-b22e-98bfa78b077d\") " pod="kube-system/kube-proxy-gtwrg" Feb 13 15:17:55.302137 kubelet[3242]: I0213 15:17:55.301141 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hostproc\") pod \"cilium-p5gml\" (UID: 
\"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302137 kubelet[3242]: I0213 15:17:55.301203 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-cgroup\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302137 kubelet[3242]: I0213 15:17:55.301246 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rr8\" (UniqueName: \"kubernetes.io/projected/d3134349-d0fc-43a4-b22e-98bfa78b077d-kube-api-access-t4rr8\") pod \"kube-proxy-gtwrg\" (UID: \"d3134349-d0fc-43a4-b22e-98bfa78b077d\") " pod="kube-system/kube-proxy-gtwrg" Feb 13 15:17:55.302137 kubelet[3242]: I0213 15:17:55.301281 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-net\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302137 kubelet[3242]: I0213 15:17:55.301324 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hubble-tls\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302137 kubelet[3242]: I0213 15:17:55.301359 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cni-path\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302635 kubelet[3242]: I0213 15:17:55.301413 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-kernel\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302635 kubelet[3242]: I0213 15:17:55.301452 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3134349-d0fc-43a4-b22e-98bfa78b077d-lib-modules\") pod \"kube-proxy-gtwrg\" (UID: \"d3134349-d0fc-43a4-b22e-98bfa78b077d\") " pod="kube-system/kube-proxy-gtwrg" Feb 13 15:17:55.302635 kubelet[3242]: I0213 15:17:55.301491 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-bpf-maps\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302635 kubelet[3242]: I0213 15:17:55.301525 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-clustermesh-secrets\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302635 kubelet[3242]: I0213 15:17:55.301566 3242 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3134349-d0fc-43a4-b22e-98bfa78b077d-xtables-lock\") pod \"kube-proxy-gtwrg\" (UID: \"d3134349-d0fc-43a4-b22e-98bfa78b077d\") " pod="kube-system/kube-proxy-gtwrg" Feb 13 15:17:55.302635 kubelet[3242]: I0213 15:17:55.301608 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-run\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302934 kubelet[3242]: I0213 15:17:55.301644 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-etc-cni-netd\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.302934 kubelet[3242]: I0213 15:17:55.301680 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-xtables-lock\") pod \"cilium-p5gml\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") " pod="kube-system/cilium-p5gml" Feb 13 15:17:55.438143 kubelet[3242]: I0213 15:17:55.436179 3242 topology_manager.go:215] "Topology Admit Handler" podUID="9e40acc5-4248-4d92-a53e-8a1e597d356d" podNamespace="kube-system" podName="cilium-operator-599987898-597sd" Feb 13 15:17:55.466448 systemd[1]: Created slice kubepods-besteffort-pod9e40acc5_4248_4d92_a53e_8a1e597d356d.slice - libcontainer container kubepods-besteffort-pod9e40acc5_4248_4d92_a53e_8a1e597d356d.slice. Feb 13 15:17:55.503675 kubelet[3242]: I0213 15:17:55.503628 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e40acc5-4248-4d92-a53e-8a1e597d356d-cilium-config-path\") pod \"cilium-operator-599987898-597sd\" (UID: \"9e40acc5-4248-4d92-a53e-8a1e597d356d\") " pod="kube-system/cilium-operator-599987898-597sd" Feb 13 15:17:55.503930 kubelet[3242]: I0213 15:17:55.503901 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d258x\" (UniqueName: \"kubernetes.io/projected/9e40acc5-4248-4d92-a53e-8a1e597d356d-kube-api-access-d258x\") pod \"cilium-operator-599987898-597sd\" (UID: \"9e40acc5-4248-4d92-a53e-8a1e597d356d\") " pod="kube-system/cilium-operator-599987898-597sd" Feb 13 15:17:55.538868 containerd[1944]: time="2025-02-13T15:17:55.538818564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtwrg,Uid:d3134349-d0fc-43a4-b22e-98bfa78b077d,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:55.571535 containerd[1944]: time="2025-02-13T15:17:55.571067352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5gml,Uid:98afc6d9-6fdd-4efe-9960-70f04a9b2ea8,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:55.601337 containerd[1944]: time="2025-02-13T15:17:55.601173853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:55.601484 containerd[1944]: time="2025-02-13T15:17:55.601287001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:55.607730 containerd[1944]: time="2025-02-13T15:17:55.603227461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.607730 containerd[1944]: time="2025-02-13T15:17:55.603407089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.647857 containerd[1944]: time="2025-02-13T15:17:55.647542909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:55.647857 containerd[1944]: time="2025-02-13T15:17:55.647635237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:55.647857 containerd[1944]: time="2025-02-13T15:17:55.647661817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.648852 containerd[1944]: time="2025-02-13T15:17:55.648248749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.672466 systemd[1]: Started cri-containerd-47f928058bcb89ff165472ee5c950d173b5c0ae1a728b33f37fab9c5f8a838f3.scope - libcontainer container 47f928058bcb89ff165472ee5c950d173b5c0ae1a728b33f37fab9c5f8a838f3. Feb 13 15:17:55.688856 systemd[1]: Started cri-containerd-35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c.scope - libcontainer container 35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c. Feb 13 15:17:55.750163 containerd[1944]: time="2025-02-13T15:17:55.749767093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtwrg,Uid:d3134349-d0fc-43a4-b22e-98bfa78b077d,Namespace:kube-system,Attempt:0,} returns sandbox id \"47f928058bcb89ff165472ee5c950d173b5c0ae1a728b33f37fab9c5f8a838f3\"" Feb 13 15:17:55.760551 containerd[1944]: time="2025-02-13T15:17:55.760298449Z" level=info msg="CreateContainer within sandbox \"47f928058bcb89ff165472ee5c950d173b5c0ae1a728b33f37fab9c5f8a838f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:17:55.762633 containerd[1944]: time="2025-02-13T15:17:55.762526525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5gml,Uid:98afc6d9-6fdd-4efe-9960-70f04a9b2ea8,Namespace:kube-system,Attempt:0,} returns sandbox id \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\"" Feb 13 15:17:55.771146 containerd[1944]: time="2025-02-13T15:17:55.770672245Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:17:55.790777 containerd[1944]: time="2025-02-13T15:17:55.790719938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-597sd,Uid:9e40acc5-4248-4d92-a53e-8a1e597d356d,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:55.813581 containerd[1944]: time="2025-02-13T15:17:55.813471878Z" level=info msg="CreateContainer within sandbox \"47f928058bcb89ff165472ee5c950d173b5c0ae1a728b33f37fab9c5f8a838f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd65f8c4dcda72b7035882aa578f5459bca81cfa226bfdd9bc01f86e28c7aa2f\"" Feb 13 15:17:55.816194 containerd[1944]: time="2025-02-13T15:17:55.815240234Z" level=info msg="StartContainer for 
\"bd65f8c4dcda72b7035882aa578f5459bca81cfa226bfdd9bc01f86e28c7aa2f\"" Feb 13 15:17:55.856104 containerd[1944]: time="2025-02-13T15:17:55.855409058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:55.856104 containerd[1944]: time="2025-02-13T15:17:55.855540278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:55.856104 containerd[1944]: time="2025-02-13T15:17:55.855570182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.857293 containerd[1944]: time="2025-02-13T15:17:55.857058074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.878430 systemd[1]: Started cri-containerd-bd65f8c4dcda72b7035882aa578f5459bca81cfa226bfdd9bc01f86e28c7aa2f.scope - libcontainer container bd65f8c4dcda72b7035882aa578f5459bca81cfa226bfdd9bc01f86e28c7aa2f. Feb 13 15:17:55.902440 systemd[1]: Started cri-containerd-afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a.scope - libcontainer container afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a. Feb 13 15:17:55.968733 containerd[1944]: time="2025-02-13T15:17:55.968466734Z" level=info msg="StartContainer for \"bd65f8c4dcda72b7035882aa578f5459bca81cfa226bfdd9bc01f86e28c7aa2f\" returns successfully" Feb 13 15:17:56.004213 containerd[1944]: time="2025-02-13T15:17:56.003374207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-597sd,Uid:9e40acc5-4248-4d92-a53e-8a1e597d356d,Namespace:kube-system,Attempt:0,} returns sandbox id \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\"" Feb 13 15:18:01.131082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197729052.mount: Deactivated successfully. 
Feb 13 15:18:05.148208 containerd[1944]: time="2025-02-13T15:18:05.147466532Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:05.149969 containerd[1944]: time="2025-02-13T15:18:05.149860400Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:18:05.152144 containerd[1944]: time="2025-02-13T15:18:05.152042204Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:05.160194 containerd[1944]: time="2025-02-13T15:18:05.159465248Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.388690403s" Feb 13 15:18:05.160194 containerd[1944]: time="2025-02-13T15:18:05.159553340Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:18:05.165787 containerd[1944]: time="2025-02-13T15:18:05.165639188Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:18:05.170104 containerd[1944]: time="2025-02-13T15:18:05.169696340Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:18:05.201327 containerd[1944]: time="2025-02-13T15:18:05.201152204Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\"" Feb 13 15:18:05.203326 containerd[1944]: time="2025-02-13T15:18:05.201843728Z" level=info msg="StartContainer for \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\"" Feb 13 15:18:05.259447 systemd[1]: Started cri-containerd-c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4.scope - libcontainer container c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4. Feb 13 15:18:05.315541 containerd[1944]: time="2025-02-13T15:18:05.315419781Z" level=info msg="StartContainer for \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\" returns successfully" Feb 13 15:18:05.335664 systemd[1]: cri-containerd-c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4.scope: Deactivated successfully. 
Feb 13 15:18:06.131153 kubelet[3242]: I0213 15:18:06.129630 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gtwrg" podStartSLOduration=11.129609381 podStartE2EDuration="11.129609381s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:56.062897699 +0000 UTC m=+15.342595469" watchObservedRunningTime="2025-02-13 15:18:06.129609381 +0000 UTC m=+25.409307151" Feb 13 15:18:06.190259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4-rootfs.mount: Deactivated successfully. Feb 13 15:18:06.364679 containerd[1944]: time="2025-02-13T15:18:06.364302322Z" level=info msg="shim disconnected" id=c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4 namespace=k8s.io Feb 13 15:18:06.364679 containerd[1944]: time="2025-02-13T15:18:06.364388218Z" level=warning msg="cleaning up after shim disconnected" id=c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4 namespace=k8s.io Feb 13 15:18:06.364679 containerd[1944]: time="2025-02-13T15:18:06.364408738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:07.112128 containerd[1944]: time="2025-02-13T15:18:07.111896842Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:18:07.159512 containerd[1944]: time="2025-02-13T15:18:07.159432238Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\"" Feb 13 15:18:07.167178 containerd[1944]: time="2025-02-13T15:18:07.166711030Z" level=info msg="StartContainer for \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\"" Feb 13 15:18:07.269750 systemd[1]: Started cri-containerd-7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60.scope - libcontainer container 7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60. Feb 13 15:18:07.277068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321295115.mount: Deactivated successfully. Feb 13 15:18:07.346167 containerd[1944]: time="2025-02-13T15:18:07.345552779Z" level=info msg="StartContainer for \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\" returns successfully" Feb 13 15:18:07.368773 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:18:07.370589 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:18:07.370947 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:18:07.377880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:18:07.383505 systemd[1]: cri-containerd-7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60.scope: Deactivated successfully. Feb 13 15:18:07.424973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:18:07.457574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60-rootfs.mount: Deactivated successfully. 
Feb 13 15:18:07.479610 containerd[1944]: time="2025-02-13T15:18:07.479427060Z" level=info msg="shim disconnected" id=7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60 namespace=k8s.io Feb 13 15:18:07.479610 containerd[1944]: time="2025-02-13T15:18:07.479508216Z" level=warning msg="cleaning up after shim disconnected" id=7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60 namespace=k8s.io Feb 13 15:18:07.479610 containerd[1944]: time="2025-02-13T15:18:07.479532828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:07.988480 containerd[1944]: time="2025-02-13T15:18:07.988398410Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:07.990603 containerd[1944]: time="2025-02-13T15:18:07.990492266Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:18:07.994171 containerd[1944]: time="2025-02-13T15:18:07.993249494Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:07.997519 containerd[1944]: time="2025-02-13T15:18:07.997453130Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.831720714s" Feb 13 15:18:07.997693 containerd[1944]: time="2025-02-13T15:18:07.997516466Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:18:08.002445 containerd[1944]: time="2025-02-13T15:18:08.002390038Z" level=info msg="CreateContainer within sandbox \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:18:08.028812 containerd[1944]: time="2025-02-13T15:18:08.028761634Z" level=info msg="CreateContainer within sandbox \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\"" Feb 13 15:18:08.031179 containerd[1944]: time="2025-02-13T15:18:08.030380470Z" level=info msg="StartContainer for \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\"" Feb 13 15:18:08.076465 systemd[1]: Started cri-containerd-3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548.scope - libcontainer container 3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548. 
Feb 13 15:18:08.131143 containerd[1944]: time="2025-02-13T15:18:08.130436795Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:18:08.142360 containerd[1944]: time="2025-02-13T15:18:08.142304663Z" level=info msg="StartContainer for \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\" returns successfully" Feb 13 15:18:08.180943 containerd[1944]: time="2025-02-13T15:18:08.180848687Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\"" Feb 13 15:18:08.183504 containerd[1944]: time="2025-02-13T15:18:08.183435035Z" level=info msg="StartContainer for \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\"" Feb 13 15:18:08.275475 systemd[1]: Started cri-containerd-80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d.scope - libcontainer container 80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d. Feb 13 15:18:08.375341 containerd[1944]: time="2025-02-13T15:18:08.375257220Z" level=info msg="StartContainer for \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\" returns successfully" Feb 13 15:18:08.395059 systemd[1]: cri-containerd-80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d.scope: Deactivated successfully. Feb 13 15:18:08.465016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d-rootfs.mount: Deactivated successfully. Feb 13 15:18:08.548165 containerd[1944]: time="2025-02-13T15:18:08.547936705Z" level=info msg="shim disconnected" id=80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d namespace=k8s.io Feb 13 15:18:08.548165 containerd[1944]: time="2025-02-13T15:18:08.548018701Z" level=warning msg="cleaning up after shim disconnected" id=80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d namespace=k8s.io Feb 13 15:18:08.548165 containerd[1944]: time="2025-02-13T15:18:08.548039941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:09.142531 containerd[1944]: time="2025-02-13T15:18:09.142465092Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:18:09.169738 containerd[1944]: time="2025-02-13T15:18:09.169556988Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\"" Feb 13 15:18:09.173247 containerd[1944]: time="2025-02-13T15:18:09.170361252Z" level=info msg="StartContainer for \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\"" Feb 13 15:18:09.283667 systemd[1]: run-containerd-runc-k8s.io-4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36-runc.yFFxZh.mount: Deactivated successfully. Feb 13 15:18:09.301692 systemd[1]: Started cri-containerd-4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36.scope - libcontainer container 4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36. 
Feb 13 15:18:09.363509 kubelet[3242]: I0213 15:18:09.363410 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-597sd" podStartSLOduration=2.373890842 podStartE2EDuration="14.363367909s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="2025-02-13 15:17:56.008968103 +0000 UTC m=+15.288665873" lastFinishedPulling="2025-02-13 15:18:07.998445182 +0000 UTC m=+27.278142940" observedRunningTime="2025-02-13 15:18:09.19535568 +0000 UTC m=+28.475053474" watchObservedRunningTime="2025-02-13 15:18:09.363367909 +0000 UTC m=+28.643065703" Feb 13 15:18:09.425864 systemd[1]: cri-containerd-4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36.scope: Deactivated successfully. Feb 13 15:18:09.427955 containerd[1944]: time="2025-02-13T15:18:09.427560589Z" level=info msg="StartContainer for \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\" returns successfully" Feb 13 15:18:09.496749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36-rootfs.mount: Deactivated successfully. Feb 13 15:18:09.509269 containerd[1944]: time="2025-02-13T15:18:09.509171102Z" level=info msg="shim disconnected" id=4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36 namespace=k8s.io Feb 13 15:18:09.510386 containerd[1944]: time="2025-02-13T15:18:09.509331974Z" level=warning msg="cleaning up after shim disconnected" id=4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36 namespace=k8s.io Feb 13 15:18:09.510386 containerd[1944]: time="2025-02-13T15:18:09.509354066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:10.160680 containerd[1944]: time="2025-02-13T15:18:10.160556185Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:18:10.203242 containerd[1944]: time="2025-02-13T15:18:10.202562365Z" level=info msg="CreateContainer within sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\"" Feb 13 15:18:10.207199 containerd[1944]: time="2025-02-13T15:18:10.206404909Z" level=info msg="StartContainer for \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\"" Feb 13 15:18:10.330797 systemd[1]: Started cri-containerd-50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27.scope - libcontainer container 50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27. Feb 13 15:18:10.409022 containerd[1944]: time="2025-02-13T15:18:10.408939722Z" level=info msg="StartContainer for \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\" returns successfully" Feb 13 15:18:10.462733 systemd[1]: run-containerd-runc-k8s.io-50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27-runc.fllJ5B.mount: Deactivated successfully. 
Feb 13 15:18:10.608502 kubelet[3242]: I0213 15:18:10.608259 3242 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:18:10.671245 kubelet[3242]: I0213 15:18:10.668912 3242 topology_manager.go:215] "Topology Admit Handler" podUID="6e66bcfb-14e0-43d9-9556-e1265bdcc44e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-47nj4" Feb 13 15:18:10.678018 kubelet[3242]: I0213 15:18:10.677975 3242 topology_manager.go:215] "Topology Admit Handler" podUID="22f5f4b4-eeac-4484-9252-73e50c315520" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jw5rc" Feb 13 15:18:10.687458 systemd[1]: Created slice kubepods-burstable-pod6e66bcfb_14e0_43d9_9556_e1265bdcc44e.slice - libcontainer container kubepods-burstable-pod6e66bcfb_14e0_43d9_9556_e1265bdcc44e.slice. Feb 13 15:18:10.704086 systemd[1]: Created slice kubepods-burstable-pod22f5f4b4_eeac_4484_9252_73e50c315520.slice - libcontainer container kubepods-burstable-pod22f5f4b4_eeac_4484_9252_73e50c315520.slice. Feb 13 15:18:10.727921 kubelet[3242]: I0213 15:18:10.727207 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvcjk\" (UniqueName: \"kubernetes.io/projected/6e66bcfb-14e0-43d9-9556-e1265bdcc44e-kube-api-access-nvcjk\") pod \"coredns-7db6d8ff4d-47nj4\" (UID: \"6e66bcfb-14e0-43d9-9556-e1265bdcc44e\") " pod="kube-system/coredns-7db6d8ff4d-47nj4" Feb 13 15:18:10.727921 kubelet[3242]: I0213 15:18:10.727275 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22f5f4b4-eeac-4484-9252-73e50c315520-config-volume\") pod \"coredns-7db6d8ff4d-jw5rc\" (UID: \"22f5f4b4-eeac-4484-9252-73e50c315520\") " pod="kube-system/coredns-7db6d8ff4d-jw5rc" Feb 13 15:18:10.727921 kubelet[3242]: I0213 15:18:10.727334 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhmm\" (UniqueName: \"kubernetes.io/projected/22f5f4b4-eeac-4484-9252-73e50c315520-kube-api-access-xjhmm\") pod \"coredns-7db6d8ff4d-jw5rc\" (UID: \"22f5f4b4-eeac-4484-9252-73e50c315520\") " pod="kube-system/coredns-7db6d8ff4d-jw5rc" Feb 13 15:18:10.727921 kubelet[3242]: I0213 15:18:10.727373 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e66bcfb-14e0-43d9-9556-e1265bdcc44e-config-volume\") pod \"coredns-7db6d8ff4d-47nj4\" (UID: \"6e66bcfb-14e0-43d9-9556-e1265bdcc44e\") " pod="kube-system/coredns-7db6d8ff4d-47nj4" Feb 13 15:18:10.998866 containerd[1944]: time="2025-02-13T15:18:10.997689005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-47nj4,Uid:6e66bcfb-14e0-43d9-9556-e1265bdcc44e,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:11.012288 containerd[1944]: time="2025-02-13T15:18:11.011864125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jw5rc,Uid:22f5f4b4-eeac-4484-9252-73e50c315520,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:11.211897 kubelet[3242]: I0213 15:18:11.209072 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p5gml" podStartSLOduration=6.813407635 podStartE2EDuration="16.209049098s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="2025-02-13 15:17:55.766063249 +0000 UTC m=+15.045761007" lastFinishedPulling="2025-02-13 15:18:05.1617047 +0000 UTC m=+24.441402470" 
observedRunningTime="2025-02-13 15:18:11.206744846 +0000 UTC m=+30.486442724" watchObservedRunningTime="2025-02-13 15:18:11.209049098 +0000 UTC m=+30.488746868" Feb 13 15:18:13.314816 (udev-worker)[4335]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:13.316426 systemd-networkd[1849]: cilium_host: Link UP Feb 13 15:18:13.316731 systemd-networkd[1849]: cilium_net: Link UP Feb 13 15:18:13.319519 systemd-networkd[1849]: cilium_net: Gained carrier Feb 13 15:18:13.319914 systemd-networkd[1849]: cilium_host: Gained carrier Feb 13 15:18:13.320185 systemd-networkd[1849]: cilium_net: Gained IPv6LL Feb 13 15:18:13.320504 systemd-networkd[1849]: cilium_host: Gained IPv6LL Feb 13 15:18:13.321676 (udev-worker)[4294]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:13.500177 systemd-networkd[1849]: cilium_vxlan: Link UP Feb 13 15:18:13.500199 systemd-networkd[1849]: cilium_vxlan: Gained carrier Feb 13 15:18:13.989164 kernel: NET: Registered PF_ALG protocol family Feb 13 15:18:15.207407 systemd-networkd[1849]: cilium_vxlan: Gained IPv6LL Feb 13 15:18:15.293895 (udev-worker)[4346]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:15.300004 systemd-networkd[1849]: lxc_health: Link UP Feb 13 15:18:15.309476 systemd-networkd[1849]: lxc_health: Gained carrier Feb 13 15:18:16.129874 systemd-networkd[1849]: lxc65645d5baf4b: Link UP Feb 13 15:18:16.139184 kernel: eth0: renamed from tmpe29df Feb 13 15:18:16.146338 systemd-networkd[1849]: lxc65645d5baf4b: Gained carrier Feb 13 15:18:16.151820 systemd-networkd[1849]: lxc9e1e7aea36ac: Link UP Feb 13 15:18:16.172149 kernel: eth0: renamed from tmp1cd6f Feb 13 15:18:16.178528 systemd-networkd[1849]: lxc9e1e7aea36ac: Gained carrier Feb 13 15:18:16.179333 (udev-worker)[4341]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:18:16.360241 systemd-networkd[1849]: lxc_health: Gained IPv6LL Feb 13 15:18:17.428894 kubelet[3242]: I0213 15:18:17.428011 3242 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:18:18.087582 systemd-networkd[1849]: lxc9e1e7aea36ac: Gained IPv6LL Feb 13 15:18:18.088038 systemd-networkd[1849]: lxc65645d5baf4b: Gained IPv6LL Feb 13 15:18:20.249884 ntpd[1909]: Listen normally on 8 cilium_host 192.168.0.62:123 Feb 13 15:18:20.250692 ntpd[1909]: 13 Feb 15:18:20 ntpd[1909]: Listen normally on 8 cilium_host 192.168.0.62:123 Feb 13 15:18:20.250692 ntpd[1909]: 13 Feb 15:18:20 ntpd[1909]: Listen normally on 9 cilium_net [fe80::c41d:20ff:fe80:9b14%4]:123 Feb 13 15:18:20.250014 ntpd[1909]: Listen normally on 9 cilium_net [fe80::c41d:20ff:fe80:9b14%4]:123 Feb 13 15:18:20.250095 ntpd[1909]: Listen normally on 10 cilium_host [fe80::1c0e:f7ff:fef5:7e18%5]:123 Feb 13 15:18:20.251837 ntpd[1909]: 13 Feb 15:18:20 ntpd[1909]: Listen normally on 10 cilium_host [fe80::1c0e:f7ff:fef5:7e18%5]:123 Feb 13 15:18:20.251837 ntpd[1909]: 13 Feb 15:18:20 ntpd[1909]: Listen normally on 11 cilium_vxlan [fe80::4893:94ff:fe0f:a740%6]:123 Feb 13 15:18:20.251837 ntpd[1909]: 13 Feb 15:18:20 ntpd[1909]: Listen normally on 12 lxc_health [fe80::fce9:85ff:fe1a:5798%8]:123 Feb 13 15:18:20.251837 ntpd[1909]: 13 Feb 15:18:20 ntpd[1909]: Listen normally on 13 lxc65645d5baf4b [fe80::d846:b5ff:fe02:65e%10]:123 Feb 13 15:18:20.251837 ntpd[1909]: 13 Feb 15:18:20 ntpd[1909]: Listen normally on 14 lxc9e1e7aea36ac [fe80::4ce7:63ff:fe85:4936%12]:123 Feb 13 15:18:20.251063 ntpd[1909]: Listen normally on 11 cilium_vxlan [fe80::4893:94ff:fe0f:a740%6]:123 Feb 13 15:18:20.251169 ntpd[1909]: Listen normally on 12 lxc_health [fe80::fce9:85ff:fe1a:5798%8]:123 Feb 13 15:18:20.251244 ntpd[1909]: Listen normally on 13 lxc65645d5baf4b [fe80::d846:b5ff:fe02:65e%10]:123 Feb 13 15:18:20.251314 ntpd[1909]: Listen normally on 14 lxc9e1e7aea36ac [fe80::4ce7:63ff:fe85:4936%12]:123 Feb 13 15:18:24.401036 containerd[1944]: time="2025-02-13T15:18:24.400744588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:24.405145 containerd[1944]: time="2025-02-13T15:18:24.403746448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:24.405755 containerd[1944]: time="2025-02-13T15:18:24.405371344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.405755 containerd[1944]: time="2025-02-13T15:18:24.405565912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.467479 systemd[1]: run-containerd-runc-k8s.io-1cd6fd44749eefd5b7e930b118caa430228cfb3c9440c886f39dbc4307751be4-runc.n5GADZ.mount: Deactivated successfully. Feb 13 15:18:24.492973 systemd[1]: Started cri-containerd-1cd6fd44749eefd5b7e930b118caa430228cfb3c9440c886f39dbc4307751be4.scope - libcontainer container 1cd6fd44749eefd5b7e930b118caa430228cfb3c9440c886f39dbc4307751be4. Feb 13 15:18:24.540506 containerd[1944]: time="2025-02-13T15:18:24.540157468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:24.541876 containerd[1944]: time="2025-02-13T15:18:24.541351828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:24.541876 containerd[1944]: time="2025-02-13T15:18:24.541421704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.541876 containerd[1944]: time="2025-02-13T15:18:24.541603048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:24.603493 systemd[1]: Started cri-containerd-e29df7d245ceae87949dba8e73672691e12f270bf14bb2da0f99ca070c2a2664.scope - libcontainer container e29df7d245ceae87949dba8e73672691e12f270bf14bb2da0f99ca070c2a2664. Feb 13 15:18:24.671681 containerd[1944]: time="2025-02-13T15:18:24.670376021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jw5rc,Uid:22f5f4b4-eeac-4484-9252-73e50c315520,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cd6fd44749eefd5b7e930b118caa430228cfb3c9440c886f39dbc4307751be4\"" Feb 13 15:18:24.680730 containerd[1944]: time="2025-02-13T15:18:24.680524613Z" level=info msg="CreateContainer within sandbox \"1cd6fd44749eefd5b7e930b118caa430228cfb3c9440c886f39dbc4307751be4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:24.712496 containerd[1944]: time="2025-02-13T15:18:24.712436753Z" level=info msg="CreateContainer within sandbox \"1cd6fd44749eefd5b7e930b118caa430228cfb3c9440c886f39dbc4307751be4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e45b5b12e902cd36693c5fb949fd5da753cad79e0b64f81c6ace579144b76e56\"" Feb 13 15:18:24.715214 containerd[1944]: time="2025-02-13T15:18:24.713917349Z" level=info msg="StartContainer for \"e45b5b12e902cd36693c5fb949fd5da753cad79e0b64f81c6ace579144b76e56\"" Feb 13 15:18:24.792587 systemd[1]: Started cri-containerd-e45b5b12e902cd36693c5fb949fd5da753cad79e0b64f81c6ace579144b76e56.scope - libcontainer container e45b5b12e902cd36693c5fb949fd5da753cad79e0b64f81c6ace579144b76e56. 
Feb 13 15:18:24.799599 containerd[1944]: time="2025-02-13T15:18:24.799543542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-47nj4,Uid:6e66bcfb-14e0-43d9-9556-e1265bdcc44e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e29df7d245ceae87949dba8e73672691e12f270bf14bb2da0f99ca070c2a2664\"" Feb 13 15:18:24.815702 containerd[1944]: time="2025-02-13T15:18:24.815356782Z" level=info msg="CreateContainer within sandbox \"e29df7d245ceae87949dba8e73672691e12f270bf14bb2da0f99ca070c2a2664\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:24.850189 containerd[1944]: time="2025-02-13T15:18:24.850091238Z" level=info msg="CreateContainer within sandbox \"e29df7d245ceae87949dba8e73672691e12f270bf14bb2da0f99ca070c2a2664\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb834da1c7266e1c392a7b3644e0e60d0c198196051897d47a85531c55ae2513\"" Feb 13 15:18:24.852340 containerd[1944]: time="2025-02-13T15:18:24.852013542Z" level=info msg="StartContainer for \"bb834da1c7266e1c392a7b3644e0e60d0c198196051897d47a85531c55ae2513\"" Feb 13 15:18:24.915137 containerd[1944]: time="2025-02-13T15:18:24.914537778Z" level=info msg="StartContainer for \"e45b5b12e902cd36693c5fb949fd5da753cad79e0b64f81c6ace579144b76e56\" returns successfully" Feb 13 15:18:24.958434 systemd[1]: Started cri-containerd-bb834da1c7266e1c392a7b3644e0e60d0c198196051897d47a85531c55ae2513.scope - libcontainer container bb834da1c7266e1c392a7b3644e0e60d0c198196051897d47a85531c55ae2513. Feb 13 15:18:25.046782 containerd[1944]: time="2025-02-13T15:18:25.046710075Z" level=info msg="StartContainer for \"bb834da1c7266e1c392a7b3644e0e60d0c198196051897d47a85531c55ae2513\" returns successfully" Feb 13 15:18:25.253391 kubelet[3242]: I0213 15:18:25.250933 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jw5rc" podStartSLOduration=30.250910992 podStartE2EDuration="30.250910992s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:25.24973624 +0000 UTC m=+44.529434046" watchObservedRunningTime="2025-02-13 15:18:25.250910992 +0000 UTC m=+44.530608750" Feb 13 15:18:25.297282 kubelet[3242]: I0213 15:18:25.296887 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-47nj4" podStartSLOduration=30.296862832 podStartE2EDuration="30.296862832s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:25.290018596 +0000 UTC m=+44.569716390" watchObservedRunningTime="2025-02-13 15:18:25.296862832 +0000 UTC m=+44.576560626" Feb 13 15:18:26.926683 systemd[1]: Started sshd@7-172.31.28.87:22-139.178.68.195:36296.service - OpenSSH per-connection server daemon (139.178.68.195:36296). Feb 13 15:18:27.132499 sshd[4875]: Accepted publickey for core from 139.178.68.195 port 36296 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:27.135254 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:27.143739 systemd-logind[1917]: New session 8 of user core. Feb 13 15:18:27.150386 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 15:18:27.420914 sshd[4877]: Connection closed by 139.178.68.195 port 36296 Feb 13 15:18:27.421841 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:27.428426 systemd[1]: sshd@7-172.31.28.87:22-139.178.68.195:36296.service: Deactivated successfully. Feb 13 15:18:27.433614 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:18:27.436052 systemd-logind[1917]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:18:27.438639 systemd-logind[1917]: Removed session 8. Feb 13 15:18:32.462623 systemd[1]: Started sshd@8-172.31.28.87:22-139.178.68.195:36298.service - OpenSSH per-connection server daemon (139.178.68.195:36298). Feb 13 15:18:32.658964 sshd[4891]: Accepted publickey for core from 139.178.68.195 port 36298 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:32.662101 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:32.670241 systemd-logind[1917]: New session 9 of user core. Feb 13 15:18:32.679386 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:18:32.936592 sshd[4893]: Connection closed by 139.178.68.195 port 36298 Feb 13 15:18:32.937797 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:32.944136 systemd[1]: sshd@8-172.31.28.87:22-139.178.68.195:36298.service: Deactivated successfully. Feb 13 15:18:32.948817 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:18:32.950638 systemd-logind[1917]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:18:32.954150 systemd-logind[1917]: Removed session 9. Feb 13 15:18:37.980644 systemd[1]: Started sshd@9-172.31.28.87:22-139.178.68.195:52234.service - OpenSSH per-connection server daemon (139.178.68.195:52234). Feb 13 15:18:38.169718 sshd[4905]: Accepted publickey for core from 139.178.68.195 port 52234 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:38.172289 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:38.181064 systemd-logind[1917]: New session 10 of user core. Feb 13 15:18:38.194460 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:18:38.436312 sshd[4907]: Connection closed by 139.178.68.195 port 52234 Feb 13 15:18:38.437398 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:38.444608 systemd[1]: sshd@9-172.31.28.87:22-139.178.68.195:52234.service: Deactivated successfully. Feb 13 15:18:38.448494 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:18:38.451519 systemd-logind[1917]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:18:38.453246 systemd-logind[1917]: Removed session 10. Feb 13 15:18:43.479667 systemd[1]: Started sshd@10-172.31.28.87:22-139.178.68.195:52250.service - OpenSSH per-connection server daemon (139.178.68.195:52250). Feb 13 15:18:43.672737 sshd[4922]: Accepted publickey for core from 139.178.68.195 port 52250 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:43.675340 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:43.683903 systemd-logind[1917]: New session 11 of user core. Feb 13 15:18:43.691437 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:18:43.935616 sshd[4924]: Connection closed by 139.178.68.195 port 52250 Feb 13 15:18:43.935407 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:43.942288 systemd[1]: sshd@10-172.31.28.87:22-139.178.68.195:52250.service: Deactivated successfully. Feb 13 15:18:43.946640 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:18:43.948461 systemd-logind[1917]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:18:43.950626 systemd-logind[1917]: Removed session 11. Feb 13 15:18:48.976043 systemd[1]: Started sshd@11-172.31.28.87:22-139.178.68.195:49998.service - OpenSSH per-connection server daemon (139.178.68.195:49998). Feb 13 15:18:49.172659 sshd[4936]: Accepted publickey for core from 139.178.68.195 port 49998 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:49.175885 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:49.184501 systemd-logind[1917]: New session 12 of user core. Feb 13 15:18:49.191456 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:18:49.439260 sshd[4938]: Connection closed by 139.178.68.195 port 49998 Feb 13 15:18:49.440413 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:49.446792 systemd[1]: sshd@11-172.31.28.87:22-139.178.68.195:49998.service: Deactivated successfully. Feb 13 15:18:49.452911 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:18:49.454816 systemd-logind[1917]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:18:49.456744 systemd-logind[1917]: Removed session 12. Feb 13 15:18:49.477638 systemd[1]: Started sshd@12-172.31.28.87:22-139.178.68.195:50008.service - OpenSSH per-connection server daemon (139.178.68.195:50008). Feb 13 15:18:49.670312 sshd[4950]: Accepted publickey for core from 139.178.68.195 port 50008 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:49.672823 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:49.680180 systemd-logind[1917]: New session 13 of user core. Feb 13 15:18:49.692436 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:18:50.019860 sshd[4952]: Connection closed by 139.178.68.195 port 50008 Feb 13 15:18:50.023996 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:50.032014 systemd[1]: sshd@12-172.31.28.87:22-139.178.68.195:50008.service: Deactivated successfully. Feb 13 15:18:50.038074 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:18:50.048829 systemd-logind[1917]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:18:50.073396 systemd[1]: Started sshd@13-172.31.28.87:22-139.178.68.195:50016.service - OpenSSH per-connection server daemon (139.178.68.195:50016). Feb 13 15:18:50.075223 systemd-logind[1917]: Removed session 13. Feb 13 15:18:50.271086 sshd[4961]: Accepted publickey for core from 139.178.68.195 port 50016 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:50.273740 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:50.282771 systemd-logind[1917]: New session 14 of user core. Feb 13 15:18:50.287368 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:18:50.554452 sshd[4963]: Connection closed by 139.178.68.195 port 50016 Feb 13 15:18:50.554256 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:50.562991 systemd-logind[1917]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:18:50.564672 systemd[1]: sshd@13-172.31.28.87:22-139.178.68.195:50016.service: Deactivated successfully. Feb 13 15:18:50.572338 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:18:50.578217 systemd-logind[1917]: Removed session 14. Feb 13 15:18:55.594641 systemd[1]: Started sshd@14-172.31.28.87:22-139.178.68.195:50028.service - OpenSSH per-connection server daemon (139.178.68.195:50028). Feb 13 15:18:55.787642 sshd[4977]: Accepted publickey for core from 139.178.68.195 port 50028 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:55.790731 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:55.802233 systemd-logind[1917]: New session 15 of user core. Feb 13 15:18:55.806372 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:18:56.050973 sshd[4979]: Connection closed by 139.178.68.195 port 50028 Feb 13 15:18:56.050277 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:56.057931 systemd[1]: sshd@14-172.31.28.87:22-139.178.68.195:50028.service: Deactivated successfully. Feb 13 15:18:56.062308 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:18:56.064607 systemd-logind[1917]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:18:56.067149 systemd-logind[1917]: Removed session 15. Feb 13 15:19:01.089659 systemd[1]: Started sshd@15-172.31.28.87:22-139.178.68.195:39842.service - OpenSSH per-connection server daemon (139.178.68.195:39842). Feb 13 15:19:01.285942 sshd[4992]: Accepted publickey for core from 139.178.68.195 port 39842 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:01.288428 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:01.296213 systemd-logind[1917]: New session 16 of user core. Feb 13 15:19:01.302403 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:19:01.550774 sshd[4994]: Connection closed by 139.178.68.195 port 39842 Feb 13 15:19:01.551738 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:01.558319 systemd[1]: sshd@15-172.31.28.87:22-139.178.68.195:39842.service: Deactivated successfully. Feb 13 15:19:01.564211 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:19:01.566778 systemd-logind[1917]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:19:01.571266 systemd-logind[1917]: Removed session 16. Feb 13 15:19:06.593858 systemd[1]: Started sshd@16-172.31.28.87:22-139.178.68.195:34210.service - OpenSSH per-connection server daemon (139.178.68.195:34210). Feb 13 15:19:06.789506 sshd[5007]: Accepted publickey for core from 139.178.68.195 port 34210 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:06.792150 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:06.800884 systemd-logind[1917]: New session 17 of user core. Feb 13 15:19:06.808393 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 15:19:07.058834 sshd[5009]: Connection closed by 139.178.68.195 port 34210 Feb 13 15:19:07.060477 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:07.066639 systemd[1]: sshd@16-172.31.28.87:22-139.178.68.195:34210.service: Deactivated successfully. Feb 13 15:19:07.071231 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:19:07.073984 systemd-logind[1917]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:19:07.076476 systemd-logind[1917]: Removed session 17. Feb 13 15:19:07.098637 systemd[1]: Started sshd@17-172.31.28.87:22-139.178.68.195:34212.service - OpenSSH per-connection server daemon (139.178.68.195:34212). Feb 13 15:19:07.283417 sshd[5019]: Accepted publickey for core from 139.178.68.195 port 34212 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:07.286032 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:07.293500 systemd-logind[1917]: New session 18 of user core. Feb 13 15:19:07.304392 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:19:07.600131 sshd[5021]: Connection closed by 139.178.68.195 port 34212 Feb 13 15:19:07.601076 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:07.606348 systemd[1]: sshd@17-172.31.28.87:22-139.178.68.195:34212.service: Deactivated successfully. Feb 13 15:19:07.610538 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:19:07.614337 systemd-logind[1917]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:19:07.616493 systemd-logind[1917]: Removed session 18. Feb 13 15:19:07.642613 systemd[1]: Started sshd@18-172.31.28.87:22-139.178.68.195:34216.service - OpenSSH per-connection server daemon (139.178.68.195:34216). Feb 13 15:19:07.835715 sshd[5030]: Accepted publickey for core from 139.178.68.195 port 34216 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:07.838312 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:07.847453 systemd-logind[1917]: New session 19 of user core. Feb 13 15:19:07.854397 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:19:10.410565 sshd[5032]: Connection closed by 139.178.68.195 port 34216 Feb 13 15:19:10.411693 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:10.418580 systemd[1]: sshd@18-172.31.28.87:22-139.178.68.195:34216.service: Deactivated successfully. Feb 13 15:19:10.425826 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:19:10.433947 systemd-logind[1917]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:19:10.457660 systemd[1]: Started sshd@19-172.31.28.87:22-139.178.68.195:34222.service - OpenSSH per-connection server daemon (139.178.68.195:34222). Feb 13 15:19:10.460211 systemd-logind[1917]: Removed session 19. Feb 13 15:19:10.651162 sshd[5048]: Accepted publickey for core from 139.178.68.195 port 34222 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:10.653593 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:10.662141 systemd-logind[1917]: New session 20 of user core. Feb 13 15:19:10.667412 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 15:19:11.167594 sshd[5050]: Connection closed by 139.178.68.195 port 34222 Feb 13 15:19:11.168648 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:11.175332 systemd[1]: sshd@19-172.31.28.87:22-139.178.68.195:34222.service: Deactivated successfully. Feb 13 15:19:11.180687 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:19:11.183088 systemd-logind[1917]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:19:11.184954 systemd-logind[1917]: Removed session 20. Feb 13 15:19:11.205668 systemd[1]: Started sshd@20-172.31.28.87:22-139.178.68.195:34226.service - OpenSSH per-connection server daemon (139.178.68.195:34226). Feb 13 15:19:11.405165 sshd[5059]: Accepted publickey for core from 139.178.68.195 port 34226 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:11.407696 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:11.416952 systemd-logind[1917]: New session 21 of user core. Feb 13 15:19:11.426400 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:19:11.684087 sshd[5061]: Connection closed by 139.178.68.195 port 34226 Feb 13 15:19:11.683907 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:11.691073 systemd[1]: sshd@20-172.31.28.87:22-139.178.68.195:34226.service: Deactivated successfully. Feb 13 15:19:11.695802 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:19:11.698515 systemd-logind[1917]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:19:11.700735 systemd-logind[1917]: Removed session 21. Feb 13 15:19:16.727610 systemd[1]: Started sshd@21-172.31.28.87:22-139.178.68.195:34482.service - OpenSSH per-connection server daemon (139.178.68.195:34482). Feb 13 15:19:16.923352 sshd[5072]: Accepted publickey for core from 139.178.68.195 port 34482 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:16.925945 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:16.934592 systemd-logind[1917]: New session 22 of user core. Feb 13 15:19:16.944405 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:19:17.190272 sshd[5074]: Connection closed by 139.178.68.195 port 34482 Feb 13 15:19:17.192766 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:17.199879 systemd[1]: sshd@21-172.31.28.87:22-139.178.68.195:34482.service: Deactivated successfully. Feb 13 15:19:17.208973 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:19:17.211431 systemd-logind[1917]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:19:17.216215 systemd-logind[1917]: Removed session 22. Feb 13 15:19:22.230663 systemd[1]: Started sshd@22-172.31.28.87:22-139.178.68.195:34494.service - OpenSSH per-connection server daemon (139.178.68.195:34494). Feb 13 15:19:22.434306 sshd[5088]: Accepted publickey for core from 139.178.68.195 port 34494 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:22.436875 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:22.445883 systemd-logind[1917]: New session 23 of user core. Feb 13 15:19:22.450406 systemd[1]: Started session-23.scope - Session 23 of User core. 
Feb 13 15:19:22.691407 sshd[5090]: Connection closed by 139.178.68.195 port 34494 Feb 13 15:19:22.691201 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:22.698401 systemd[1]: sshd@22-172.31.28.87:22-139.178.68.195:34494.service: Deactivated successfully. Feb 13 15:19:22.701906 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:19:22.703694 systemd-logind[1917]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:19:22.705950 systemd-logind[1917]: Removed session 23. Feb 13 15:19:27.731709 systemd[1]: Started sshd@23-172.31.28.87:22-139.178.68.195:42128.service - OpenSSH per-connection server daemon (139.178.68.195:42128). Feb 13 15:19:27.930745 sshd[5104]: Accepted publickey for core from 139.178.68.195 port 42128 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:27.933355 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:27.940704 systemd-logind[1917]: New session 24 of user core. Feb 13 15:19:27.952384 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:19:28.040886 update_engine[1918]: I20250213 15:19:28.040304 1918 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 15:19:28.040886 update_engine[1918]: I20250213 15:19:28.040372 1918 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 15:19:28.040886 update_engine[1918]: I20250213 15:19:28.040648 1918 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 15:19:28.043096 update_engine[1918]: I20250213 15:19:28.042845 1918 omaha_request_params.cc:62] Current group set to stable Feb 13 15:19:28.043096 update_engine[1918]: I20250213 15:19:28.043006 1918 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 15:19:28.043350 update_engine[1918]: I20250213 15:19:28.043030 1918 update_attempter.cc:643] Scheduling an action processor start. Feb 13 15:19:28.043446 locksmithd[1960]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 15:19:28.044898 update_engine[1918]: I20250213 15:19:28.043846 1918 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:19:28.044898 update_engine[1918]: I20250213 15:19:28.043937 1918 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 15:19:28.044898 update_engine[1918]: I20250213 15:19:28.044054 1918 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:19:28.044898 update_engine[1918]: I20250213 15:19:28.044075 1918 omaha_request_action.cc:272] Request: Feb 13 15:19:28.044898 update_engine[1918]: I20250213 15:19:28.044091 1918 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:19:28.046747 update_engine[1918]: I20250213 15:19:28.046334 1918 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:19:28.046885 update_engine[1918]: I20250213 15:19:28.046840 1918 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 15:19:28.078769 update_engine[1918]: E20250213 15:19:28.078537 1918 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:28.078769 update_engine[1918]: I20250213 15:19:28.078671 1918 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 15:19:28.193818 sshd[5106]: Connection closed by 139.178.68.195 port 42128
Feb 13 15:19:28.194947 sshd-session[5104]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:28.200449 systemd[1]: sshd@23-172.31.28.87:22-139.178.68.195:42128.service: Deactivated successfully.
Feb 13 15:19:28.204412 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:19:28.209188 systemd-logind[1917]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:19:28.211493 systemd-logind[1917]: Removed session 24.
Feb 13 15:19:33.234626 systemd[1]: Started sshd@24-172.31.28.87:22-139.178.68.195:42132.service - OpenSSH per-connection server daemon (139.178.68.195:42132).
Feb 13 15:19:33.432542 sshd[5116]: Accepted publickey for core from 139.178.68.195 port 42132 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:33.435012 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:33.442249 systemd-logind[1917]: New session 25 of user core.
Feb 13 15:19:33.451381 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:19:33.693176 sshd[5118]: Connection closed by 139.178.68.195 port 42132
Feb 13 15:19:33.694461 sshd-session[5116]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:33.701005 systemd[1]: sshd@24-172.31.28.87:22-139.178.68.195:42132.service: Deactivated successfully.
Feb 13 15:19:33.704900 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:19:33.706605 systemd-logind[1917]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:19:33.709291 systemd-logind[1917]: Removed session 25.
Feb 13 15:19:33.729633 systemd[1]: Started sshd@25-172.31.28.87:22-139.178.68.195:42144.service - OpenSSH per-connection server daemon (139.178.68.195:42144).
Feb 13 15:19:33.919816 sshd[5129]: Accepted publickey for core from 139.178.68.195 port 42144 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:33.922672 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:33.930020 systemd-logind[1917]: New session 26 of user core.
Feb 13 15:19:33.937362 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:19:36.544151 containerd[1944]: time="2025-02-13T15:19:36.544035782Z" level=info msg="StopContainer for \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\" with timeout 30 (s)"
Feb 13 15:19:36.548194 containerd[1944]: time="2025-02-13T15:19:36.546675902Z" level=info msg="Stop container \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\" with signal terminated"
Feb 13 15:19:36.583271 systemd[1]: cri-containerd-3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548.scope: Deactivated successfully.
Feb 13 15:19:36.599021 containerd[1944]: time="2025-02-13T15:19:36.598941230Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:19:36.616515 containerd[1944]: time="2025-02-13T15:19:36.616465742Z" level=info msg="StopContainer for \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\" with timeout 2 (s)"
Feb 13 15:19:36.617307 containerd[1944]: time="2025-02-13T15:19:36.617257958Z" level=info msg="Stop container \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\" with signal terminated"
Feb 13 15:19:36.632369 systemd-networkd[1849]: lxc_health: Link DOWN
Feb 13 15:19:36.632413 systemd-networkd[1849]: lxc_health: Lost carrier
Feb 13 15:19:36.662484 systemd[1]: cri-containerd-50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27.scope: Deactivated successfully.
Feb 13 15:19:36.663078 systemd[1]: cri-containerd-50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27.scope: Consumed 14.257s CPU time.
Feb 13 15:19:36.673497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548-rootfs.mount: Deactivated successfully.
Feb 13 15:19:36.692497 containerd[1944]: time="2025-02-13T15:19:36.692062731Z" level=info msg="shim disconnected" id=3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548 namespace=k8s.io
Feb 13 15:19:36.692497 containerd[1944]: time="2025-02-13T15:19:36.692219979Z" level=warning msg="cleaning up after shim disconnected" id=3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548 namespace=k8s.io
Feb 13 15:19:36.692497 containerd[1944]: time="2025-02-13T15:19:36.692242983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:36.719710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27-rootfs.mount: Deactivated successfully.
Feb 13 15:19:36.733884 containerd[1944]: time="2025-02-13T15:19:36.733663419Z" level=info msg="StopContainer for \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\" returns successfully"
Feb 13 15:19:36.735235 containerd[1944]: time="2025-02-13T15:19:36.735175911Z" level=info msg="StopPodSandbox for \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\""
Feb 13 15:19:36.735394 containerd[1944]: time="2025-02-13T15:19:36.735243651Z" level=info msg="Container to stop \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:36.735791 containerd[1944]: time="2025-02-13T15:19:36.735568407Z" level=info msg="shim disconnected" id=50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27 namespace=k8s.io
Feb 13 15:19:36.735791 containerd[1944]: time="2025-02-13T15:19:36.735633279Z" level=warning msg="cleaning up after shim disconnected" id=50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27 namespace=k8s.io
Feb 13 15:19:36.735791 containerd[1944]: time="2025-02-13T15:19:36.735656751Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:36.741307 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a-shm.mount: Deactivated successfully.
Feb 13 15:19:36.758033 systemd[1]: cri-containerd-afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a.scope: Deactivated successfully.
Feb 13 15:19:36.779736 containerd[1944]: time="2025-02-13T15:19:36.779667555Z" level=info msg="StopContainer for \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\" returns successfully"
Feb 13 15:19:36.781129 containerd[1944]: time="2025-02-13T15:19:36.780824463Z" level=info msg="StopPodSandbox for \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\""
Feb 13 15:19:36.781129 containerd[1944]: time="2025-02-13T15:19:36.781008651Z" level=info msg="Container to stop \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:36.781775 containerd[1944]: time="2025-02-13T15:19:36.781039779Z" level=info msg="Container to stop \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:36.781775 containerd[1944]: time="2025-02-13T15:19:36.781292067Z" level=info msg="Container to stop \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:36.781775 containerd[1944]: time="2025-02-13T15:19:36.781315359Z" level=info msg="Container to stop \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:36.781775 containerd[1944]: time="2025-02-13T15:19:36.781336203Z" level=info msg="Container to stop \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:36.802549 systemd[1]: cri-containerd-35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c.scope: Deactivated successfully.
Feb 13 15:19:36.831141 containerd[1944]: time="2025-02-13T15:19:36.830758863Z" level=info msg="shim disconnected" id=afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a namespace=k8s.io
Feb 13 15:19:36.831141 containerd[1944]: time="2025-02-13T15:19:36.830830935Z" level=warning msg="cleaning up after shim disconnected" id=afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a namespace=k8s.io
Feb 13 15:19:36.831141 containerd[1944]: time="2025-02-13T15:19:36.830850135Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:36.866878 containerd[1944]: time="2025-02-13T15:19:36.866499832Z" level=info msg="shim disconnected" id=35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c namespace=k8s.io
Feb 13 15:19:36.866878 containerd[1944]: time="2025-02-13T15:19:36.866591284Z" level=warning msg="cleaning up after shim disconnected" id=35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c namespace=k8s.io
Feb 13 15:19:36.866878 containerd[1944]: time="2025-02-13T15:19:36.866612620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:36.872904 containerd[1944]: time="2025-02-13T15:19:36.872164720Z" level=info msg="TearDown network for sandbox \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" successfully"
Feb 13 15:19:36.872904 containerd[1944]: time="2025-02-13T15:19:36.872213380Z" level=info msg="StopPodSandbox for \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" returns successfully"
Feb 13 15:19:36.898457 containerd[1944]: time="2025-02-13T15:19:36.898358704Z" level=info msg="TearDown network for sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" successfully"
Feb 13 15:19:36.898457 containerd[1944]: time="2025-02-13T15:19:36.898443304Z" level=info msg="StopPodSandbox for \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" returns successfully"
Feb 13 15:19:36.999705 kubelet[3242]: I0213 15:19:36.999373 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cni-path\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.000890 kubelet[3242]: I0213 15:19:37.000193 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-bpf-maps\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.000890 kubelet[3242]: I0213 15:19:36.999570 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cni-path" (OuterVolumeSpecName: "cni-path") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.000890 kubelet[3242]: I0213 15:19:37.000358 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.001480 kubelet[3242]: I0213 15:19:37.001076 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmb5z\" (UniqueName: \"kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-kube-api-access-jmb5z\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.001480 kubelet[3242]: I0213 15:19:37.001161 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-xtables-lock\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.001480 kubelet[3242]: I0213 15:19:37.001200 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-run\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.001480 kubelet[3242]: I0213 15:19:37.001236 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-kernel\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.001480 kubelet[3242]: I0213 15:19:37.001271 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-etc-cni-netd\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.001480 kubelet[3242]: I0213 15:19:37.001376 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-clustermesh-secrets\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.002096 kubelet[3242]: I0213 15:19:37.001499 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.002096 kubelet[3242]: I0213 15:19:37.001544 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e40acc5-4248-4d92-a53e-8a1e597d356d-cilium-config-path\") pod \"9e40acc5-4248-4d92-a53e-8a1e597d356d\" (UID: \"9e40acc5-4248-4d92-a53e-8a1e597d356d\") "
Feb 13 15:19:37.002096 kubelet[3242]: I0213 15:19:37.001549 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.002096 kubelet[3242]: I0213 15:19:37.001583 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-lib-modules\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.002096 kubelet[3242]: I0213 15:19:37.001589 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.002414 kubelet[3242]: I0213 15:19:37.001816 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-config-path\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.005069 kubelet[3242]: I0213 15:19:37.002617 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hostproc\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.005069 kubelet[3242]: I0213 15:19:37.002745 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-cgroup\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.005069 kubelet[3242]: I0213 15:19:37.002866 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d258x\" (UniqueName: \"kubernetes.io/projected/9e40acc5-4248-4d92-a53e-8a1e597d356d-kube-api-access-d258x\") pod \"9e40acc5-4248-4d92-a53e-8a1e597d356d\" (UID: \"9e40acc5-4248-4d92-a53e-8a1e597d356d\") "
Feb 13 15:19:37.005069 kubelet[3242]: I0213 15:19:37.002907 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-net\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.005778 kubelet[3242]: I0213 15:19:37.005518 3242 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hubble-tls\") pod \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\" (UID: \"98afc6d9-6fdd-4efe-9960-70f04a9b2ea8\") "
Feb 13 15:19:37.005778 kubelet[3242]: I0213 15:19:37.005667 3242 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-kernel\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.005778 kubelet[3242]: I0213 15:19:37.005721 3242 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cni-path\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.005778 kubelet[3242]: I0213 15:19:37.005744 3242 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-bpf-maps\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.006240 kubelet[3242]: I0213 15:19:37.006042 3242 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-xtables-lock\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.006240 kubelet[3242]: I0213 15:19:37.006075 3242 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-run\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.007160 kubelet[3242]: I0213 15:19:37.006710 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.009133 kubelet[3242]: I0213 15:19:37.008544 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.014914 kubelet[3242]: I0213 15:19:37.014837 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-kube-api-access-jmb5z" (OuterVolumeSpecName: "kube-api-access-jmb5z") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "kube-api-access-jmb5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:19:37.015065 kubelet[3242]: I0213 15:19:37.014947 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.015065 kubelet[3242]: I0213 15:19:37.014989 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hostproc" (OuterVolumeSpecName: "hostproc") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.015635 kubelet[3242]: I0213 15:19:37.015571 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:37.019465 kubelet[3242]: I0213 15:19:37.019285 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:19:37.021237 kubelet[3242]: I0213 15:19:37.020631 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e40acc5-4248-4d92-a53e-8a1e597d356d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e40acc5-4248-4d92-a53e-8a1e597d356d" (UID: "9e40acc5-4248-4d92-a53e-8a1e597d356d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:19:37.022717 kubelet[3242]: I0213 15:19:37.022665 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e40acc5-4248-4d92-a53e-8a1e597d356d-kube-api-access-d258x" (OuterVolumeSpecName: "kube-api-access-d258x") pod "9e40acc5-4248-4d92-a53e-8a1e597d356d" (UID: "9e40acc5-4248-4d92-a53e-8a1e597d356d"). InnerVolumeSpecName "kube-api-access-d258x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:19:37.024493 kubelet[3242]: I0213 15:19:37.024404 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:19:37.025022 kubelet[3242]: I0213 15:19:37.024969 3242 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" (UID: "98afc6d9-6fdd-4efe-9960-70f04a9b2ea8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:19:37.106806 kubelet[3242]: I0213 15:19:37.106679 3242 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-clustermesh-secrets\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.106999 kubelet[3242]: I0213 15:19:37.106976 3242 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e40acc5-4248-4d92-a53e-8a1e597d356d-cilium-config-path\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107162 kubelet[3242]: I0213 15:19:37.107138 3242 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-lib-modules\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107259 3242 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-config-path\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107291 3242 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hostproc\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107311 3242 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-cilium-cgroup\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107331 3242 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-d258x\" (UniqueName: \"kubernetes.io/projected/9e40acc5-4248-4d92-a53e-8a1e597d356d-kube-api-access-d258x\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107351 3242 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-host-proc-sys-net\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107372 3242 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-hubble-tls\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107394 3242 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jmb5z\" (UniqueName: \"kubernetes.io/projected/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-kube-api-access-jmb5z\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.107447 kubelet[3242]: I0213 15:19:37.107416 3242 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8-etc-cni-netd\") on node \"ip-172-31-28-87\" DevicePath \"\""
Feb 13 15:19:37.405207 kubelet[3242]: I0213 15:19:37.405081 3242 scope.go:117] "RemoveContainer" containerID="3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548"
Feb 13 15:19:37.407783 containerd[1944]: time="2025-02-13T15:19:37.407703002Z" level=info msg="RemoveContainer for \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\""
Feb 13 15:19:37.423182 systemd[1]: Removed slice kubepods-besteffort-pod9e40acc5_4248_4d92_a53e_8a1e597d356d.slice - libcontainer container kubepods-besteffort-pod9e40acc5_4248_4d92_a53e_8a1e597d356d.slice.
Feb 13 15:19:37.428228 containerd[1944]: time="2025-02-13T15:19:37.427240154Z" level=info msg="RemoveContainer for \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\" returns successfully"
Feb 13 15:19:37.429845 kubelet[3242]: I0213 15:19:37.429800 3242 scope.go:117] "RemoveContainer" containerID="3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548"
Feb 13 15:19:37.430674 containerd[1944]: time="2025-02-13T15:19:37.430503698Z" level=error msg="ContainerStatus for \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\": not found"
Feb 13 15:19:37.430864 kubelet[3242]: E0213 15:19:37.430788 3242 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\": not found" containerID="3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548"
Feb 13 15:19:37.431012 kubelet[3242]: I0213 15:19:37.430841 3242 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548"} err="failed to get container status \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a7939dd76e2a4977bb4bcb727f58d714cd764b1c84631a6b2d7f2529508e548\": not found"
Feb 13 15:19:37.431012 kubelet[3242]: I0213 15:19:37.430994 3242 scope.go:117] "RemoveContainer" containerID="50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27"
Feb 13 15:19:37.433702 systemd[1]: Removed slice kubepods-burstable-pod98afc6d9_6fdd_4efe_9960_70f04a9b2ea8.slice - libcontainer container kubepods-burstable-pod98afc6d9_6fdd_4efe_9960_70f04a9b2ea8.slice.
Feb 13 15:19:37.433941 systemd[1]: kubepods-burstable-pod98afc6d9_6fdd_4efe_9960_70f04a9b2ea8.slice: Consumed 14.417s CPU time.
Feb 13 15:19:37.435771 containerd[1944]: time="2025-02-13T15:19:37.435276026Z" level=info msg="RemoveContainer for \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\""
Feb 13 15:19:37.444587 containerd[1944]: time="2025-02-13T15:19:37.444522686Z" level=info msg="RemoveContainer for \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\" returns successfully"
Feb 13 15:19:37.445276 kubelet[3242]: I0213 15:19:37.445072 3242 scope.go:117] "RemoveContainer" containerID="4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36"
Feb 13 15:19:37.451180 containerd[1944]: time="2025-02-13T15:19:37.450241359Z" level=info msg="RemoveContainer for \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\""
Feb 13 15:19:37.458855 containerd[1944]: time="2025-02-13T15:19:37.458800923Z" level=info msg="RemoveContainer for \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\" returns successfully"
Feb 13 15:19:37.459719 kubelet[3242]: I0213 15:19:37.459666 3242 scope.go:117] "RemoveContainer" containerID="80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d"
Feb 13 15:19:37.468400 containerd[1944]: time="2025-02-13T15:19:37.468198507Z" level=info msg="RemoveContainer for \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\""
Feb 13 15:19:37.490926 containerd[1944]: time="2025-02-13T15:19:37.490777587Z" level=info msg="RemoveContainer for \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\" returns successfully"
Feb 13 15:19:37.492063 kubelet[3242]: I0213 15:19:37.491242 3242 scope.go:117] "RemoveContainer" containerID="7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60"
Feb 13 15:19:37.493104 containerd[1944]: time="2025-02-13T15:19:37.493057803Z" level=info msg="RemoveContainer for \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\""
Feb 13 15:19:37.499084 containerd[1944]: time="2025-02-13T15:19:37.499035111Z" level=info msg="RemoveContainer for \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\" returns successfully"
Feb 13 15:19:37.499573 kubelet[3242]: I0213 15:19:37.499542 3242 scope.go:117] "RemoveContainer" containerID="c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4"
Feb 13 15:19:37.501425 containerd[1944]: time="2025-02-13T15:19:37.501377751Z" level=info msg="RemoveContainer for \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\""
Feb 13 15:19:37.507685 containerd[1944]: time="2025-02-13T15:19:37.507563691Z" level=info msg="RemoveContainer for \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\" returns successfully"
Feb 13 15:19:37.507915 kubelet[3242]: I0213 15:19:37.507883 3242 scope.go:117] "RemoveContainer" containerID="50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27"
Feb 13 15:19:37.508374 containerd[1944]: time="2025-02-13T15:19:37.508239675Z" level=error msg="ContainerStatus for \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\": not found"
Feb 13 15:19:37.508517 kubelet[3242]: E0213 15:19:37.508481 3242 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\": not found" containerID="50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27"
Feb 13 15:19:37.508579 kubelet[3242]: I0213 15:19:37.508547 3242 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27"} err="failed to get container status \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\": rpc error: code = NotFound desc = an error occurred when try to find container \"50aedce2fde70ea1aa36ca51b2fd0404a1343a127d9b92f910910a2763fd6c27\": not found"
Feb 13 15:19:37.508662 kubelet[3242]: I0213 15:19:37.508590 3242 scope.go:117] "RemoveContainer" containerID="4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36"
Feb 13 15:19:37.509076 containerd[1944]: time="2025-02-13T15:19:37.509030391Z" level=error msg="ContainerStatus for \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\": not found"
Feb 13 15:19:37.509525 kubelet[3242]: E0213 15:19:37.509484 3242 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\": not found" containerID="4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36"
Feb 13 15:19:37.509603 kubelet[3242]: I0213 15:19:37.509534 3242 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36"} err="failed to get container status \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a380938d82d76df135c18ea81c196bd931d888be054ee8a6fd89ee0b90d8b36\": not found"
Feb 13 15:19:37.509603 kubelet[3242]: I0213 15:19:37.509568 3242 scope.go:117] "RemoveContainer" containerID="80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d"
Feb 13 15:19:37.510038 containerd[1944]: time="2025-02-13T15:19:37.509923707Z" level=error msg="ContainerStatus for \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\": not found"
Feb 13 15:19:37.510405 kubelet[3242]: E0213 15:19:37.510363 3242 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\": not found" containerID="80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d"
Feb 13 15:19:37.510492 kubelet[3242]: I0213 15:19:37.510438 3242 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d"} err="failed to get container status \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\": rpc error: code = NotFound desc = an error occurred when try to find container \"80b88aa1c67283b25d66db15e363e5311b4977a995932009099ec28e2649b81d\": not found"
Feb 13 15:19:37.510492 kubelet[3242]: I0213 15:19:37.510480 3242 scope.go:117] "RemoveContainer" containerID="7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60"
Feb 13 15:19:37.510883 containerd[1944]: time="2025-02-13T15:19:37.510833211Z" level=error msg="ContainerStatus for \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\": not found"
Feb 13 15:19:37.511547 kubelet[3242]: E0213 15:19:37.511466 3242 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\": not found" containerID="7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60"
Feb 13 15:19:37.511547 kubelet[3242]: I0213 15:19:37.511521 3242 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60"} err="failed to get container status \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c3e85c1ab6ca226007cdc447b4bbdd99469aec95676bc7baa7689d2c5033e60\": not found"
Feb 13 15:19:37.512179 kubelet[3242]: I0213 15:19:37.511558 3242 scope.go:117] "RemoveContainer" containerID="c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4"
Feb 13 15:19:37.512258 containerd[1944]: time="2025-02-13T15:19:37.511901115Z" level=error msg="ContainerStatus for \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\": not found"
Feb 13 15:19:37.512468 kubelet[3242]: E0213 15:19:37.512339 3242 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\": not found" containerID="c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4"
Feb 13 15:19:37.512468 kubelet[3242]: I0213 15:19:37.512441 3242 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4"} err="failed to get container status \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2b1af9fb6ed9b31c49b75ca79bcdfd5e22f97405e46b1456f8d5fc3d19834d4\": not found"
Feb 13 15:19:37.553931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a-rootfs.mount: Deactivated successfully.
Feb 13 15:19:37.555032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c-rootfs.mount: Deactivated successfully.
Feb 13 15:19:37.555282 systemd[1]: var-lib-kubelet-pods-9e40acc5\x2d4248\x2d4d92\x2da53e\x2d8a1e597d356d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd258x.mount: Deactivated successfully.
Feb 13 15:19:37.555427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c-shm.mount: Deactivated successfully.
Feb 13 15:19:37.555568 systemd[1]: var-lib-kubelet-pods-98afc6d9\x2d6fdd\x2d4efe\x2d9960\x2d70f04a9b2ea8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmb5z.mount: Deactivated successfully.
Feb 13 15:19:37.555698 systemd[1]: var-lib-kubelet-pods-98afc6d9\x2d6fdd\x2d4efe\x2d9960\x2d70f04a9b2ea8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:19:37.555832 systemd[1]: var-lib-kubelet-pods-98afc6d9\x2d6fdd\x2d4efe\x2d9960\x2d70f04a9b2ea8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:19:38.041231 update_engine[1918]: I20250213 15:19:38.041146 1918 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:38.041764 update_engine[1918]: I20250213 15:19:38.041502 1918 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:38.041906 update_engine[1918]: I20250213 15:19:38.041846 1918 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:38.042408 update_engine[1918]: E20250213 15:19:38.042351 1918 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:38.042500 update_engine[1918]: I20250213 15:19:38.042441 1918 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 15:19:38.476298 sshd[5131]: Connection closed by 139.178.68.195 port 42144
Feb 13 15:19:38.477569 sshd-session[5129]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:38.485211 systemd[1]: sshd@25-172.31.28.87:22-139.178.68.195:42144.service: Deactivated successfully.
Feb 13 15:19:38.490345 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:19:38.490853 systemd[1]: session-26.scope: Consumed 1.854s CPU time.
Feb 13 15:19:38.492347 systemd-logind[1917]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:19:38.494227 systemd-logind[1917]: Removed session 26.
Feb 13 15:19:38.519611 systemd[1]: Started sshd@26-172.31.28.87:22-139.178.68.195:41038.service - OpenSSH per-connection server daemon (139.178.68.195:41038).
Feb 13 15:19:38.706837 sshd[5293]: Accepted publickey for core from 139.178.68.195 port 41038 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:38.709406 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:38.718340 systemd-logind[1917]: New session 27 of user core.
Feb 13 15:19:38.724379 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:19:38.922566 kubelet[3242]: I0213 15:19:38.922290 3242 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" path="/var/lib/kubelet/pods/98afc6d9-6fdd-4efe-9960-70f04a9b2ea8/volumes"
Feb 13 15:19:38.925269 kubelet[3242]: I0213 15:19:38.925196 3242 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e40acc5-4248-4d92-a53e-8a1e597d356d" path="/var/lib/kubelet/pods/9e40acc5-4248-4d92-a53e-8a1e597d356d/volumes"
Feb 13 15:19:39.249884 ntpd[1909]: Deleting interface #12 lxc_health, fe80::fce9:85ff:fe1a:5798%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 15:19:39.250534 ntpd[1909]: 13 Feb 15:19:39 ntpd[1909]: Deleting interface #12 lxc_health, fe80::fce9:85ff:fe1a:5798%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 15:19:40.509397 sshd[5295]: Connection closed by 139.178.68.195 port 41038
Feb 13 15:19:40.510352 sshd-session[5293]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:40.519820 systemd[1]: sshd@26-172.31.28.87:22-139.178.68.195:41038.service: Deactivated successfully.
Feb 13 15:19:40.526715 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:19:40.530354 systemd[1]: session-27.scope: Consumed 1.579s CPU time.
Feb 13 15:19:40.537443 systemd-logind[1917]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:19:40.561344 systemd[1]: Started sshd@27-172.31.28.87:22-139.178.68.195:41054.service - OpenSSH per-connection server daemon (139.178.68.195:41054).
Feb 13 15:19:40.563769 systemd-logind[1917]: Removed session 27.
Feb 13 15:19:40.645457 kubelet[3242]: I0213 15:19:40.644543 3242 topology_manager.go:215] "Topology Admit Handler" podUID="106ef485-2bb6-41ff-bf35-7d73eaef1077" podNamespace="kube-system" podName="cilium-bvmwv"
Feb 13 15:19:40.647208 kubelet[3242]: E0213 15:19:40.646190 3242 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" containerName="mount-cgroup"
Feb 13 15:19:40.647208 kubelet[3242]: E0213 15:19:40.646274 3242 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" containerName="mount-bpf-fs"
Feb 13 15:19:40.647208 kubelet[3242]: E0213 15:19:40.646292 3242 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" containerName="clean-cilium-state"
Feb 13 15:19:40.647208 kubelet[3242]: E0213 15:19:40.646307 3242 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" containerName="cilium-agent"
Feb 13 15:19:40.647208 kubelet[3242]: E0213 15:19:40.646347 3242 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" containerName="apply-sysctl-overwrites"
Feb 13 15:19:40.647208 kubelet[3242]: E0213 15:19:40.646366 3242 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e40acc5-4248-4d92-a53e-8a1e597d356d" containerName="cilium-operator"
Feb 13 15:19:40.647208 kubelet[3242]: I0213 15:19:40.646438 3242 memory_manager.go:354] "RemoveStaleState removing state" podUID="98afc6d9-6fdd-4efe-9960-70f04a9b2ea8" containerName="cilium-agent"
Feb 13 15:19:40.647208 kubelet[3242]: I0213 15:19:40.646457 3242 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e40acc5-4248-4d92-a53e-8a1e597d356d" containerName="cilium-operator"
Feb 13 15:19:40.663975 systemd[1]: Created slice kubepods-burstable-pod106ef485_2bb6_41ff_bf35_7d73eaef1077.slice - libcontainer container kubepods-burstable-pod106ef485_2bb6_41ff_bf35_7d73eaef1077.slice.
Feb 13 15:19:40.732717 kubelet[3242]: I0213 15:19:40.732665 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/106ef485-2bb6-41ff-bf35-7d73eaef1077-cilium-ipsec-secrets\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.733147 kubelet[3242]: I0213 15:19:40.733039 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-cni-path\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.733325 kubelet[3242]: I0213 15:19:40.733276 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zfxg\" (UniqueName: \"kubernetes.io/projected/106ef485-2bb6-41ff-bf35-7d73eaef1077-kube-api-access-6zfxg\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.733565 kubelet[3242]: I0213 15:19:40.733539 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-bpf-maps\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.733785 kubelet[3242]: I0213 15:19:40.733746 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-lib-modules\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.734022 kubelet[3242]: I0213 15:19:40.733998 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-xtables-lock\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.734217 kubelet[3242]: I0213 15:19:40.734180 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/106ef485-2bb6-41ff-bf35-7d73eaef1077-cilium-config-path\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.734439 kubelet[3242]: I0213 15:19:40.734415 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-cilium-cgroup\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.734582 kubelet[3242]: I0213 15:19:40.734557 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-host-proc-sys-kernel\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.734784 kubelet[3242]: I0213 15:19:40.734759 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/106ef485-2bb6-41ff-bf35-7d73eaef1077-hubble-tls\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.734982 kubelet[3242]: I0213 15:19:40.734959 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-cilium-run\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.735171 kubelet[3242]: I0213 15:19:40.735148 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-host-proc-sys-net\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.735366 kubelet[3242]: I0213 15:19:40.735325 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-hostproc\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.735540 kubelet[3242]: I0213 15:19:40.735518 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/106ef485-2bb6-41ff-bf35-7d73eaef1077-etc-cni-netd\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.735805 kubelet[3242]: I0213 15:19:40.735702 3242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/106ef485-2bb6-41ff-bf35-7d73eaef1077-clustermesh-secrets\") pod \"cilium-bvmwv\" (UID: \"106ef485-2bb6-41ff-bf35-7d73eaef1077\") " pod="kube-system/cilium-bvmwv"
Feb 13 15:19:40.776434 sshd[5305]: Accepted publickey for core from 139.178.68.195 port 41054 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:40.777807 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:40.787215 systemd-logind[1917]: New session 28 of user core.
Feb 13 15:19:40.796422 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:19:40.932335 containerd[1944]: time="2025-02-13T15:19:40.932005052Z" level=info msg="StopPodSandbox for \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\""
Feb 13 15:19:40.932335 containerd[1944]: time="2025-02-13T15:19:40.932230196Z" level=info msg="TearDown network for sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" successfully"
Feb 13 15:19:40.932335 containerd[1944]: time="2025-02-13T15:19:40.932257028Z" level=info msg="StopPodSandbox for \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" returns successfully"
Feb 13 15:19:40.939249 sshd[5307]: Connection closed by 139.178.68.195 port 41054
Feb 13 15:19:40.939753 containerd[1944]: time="2025-02-13T15:19:40.935240528Z" level=info msg="RemovePodSandbox for \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\""
Feb 13 15:19:40.939753 containerd[1944]: time="2025-02-13T15:19:40.935292440Z" level=info msg="Forcibly stopping sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\""
Feb 13 15:19:40.939753 containerd[1944]: time="2025-02-13T15:19:40.935395352Z" level=info msg="TearDown network for sandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" successfully"
Feb 13 15:19:40.934493 sshd-session[5305]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:40.952489 systemd[1]: sshd@27-172.31.28.87:22-139.178.68.195:41054.service: Deactivated successfully.
Feb 13 15:19:40.953641 containerd[1944]: time="2025-02-13T15:19:40.952807016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:19:40.953641 containerd[1944]: time="2025-02-13T15:19:40.952891652Z" level=info msg="RemovePodSandbox \"35c0addc4c1d2a52bb52f14fd8902d38bcff3eb3fb8b21d355704c1baa52516c\" returns successfully"
Feb 13 15:19:40.959094 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:19:40.961867 containerd[1944]: time="2025-02-13T15:19:40.961420352Z" level=info msg="StopPodSandbox for \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\""
Feb 13 15:19:40.961867 containerd[1944]: time="2025-02-13T15:19:40.961574192Z" level=info msg="TearDown network for sandbox \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" successfully"
Feb 13 15:19:40.961867 containerd[1944]: time="2025-02-13T15:19:40.961598936Z" level=info msg="StopPodSandbox for \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" returns successfully"
Feb 13 15:19:40.963686 containerd[1944]: time="2025-02-13T15:19:40.963638744Z" level=info msg="RemovePodSandbox for \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\""
Feb 13 15:19:40.964158 containerd[1944]: time="2025-02-13T15:19:40.963910112Z" level=info msg="Forcibly stopping sandbox \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\""
Feb 13 15:19:40.964472 containerd[1944]: time="2025-02-13T15:19:40.964100636Z" level=info msg="TearDown network for sandbox \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" successfully"
Feb 13 15:19:40.969233 systemd-logind[1917]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:19:40.977541 containerd[1944]: time="2025-02-13T15:19:40.977073080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvmwv,Uid:106ef485-2bb6-41ff-bf35-7d73eaef1077,Namespace:kube-system,Attempt:0,}"
Feb 13 15:19:40.981287 containerd[1944]: time="2025-02-13T15:19:40.979000916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:19:40.981753 containerd[1944]: time="2025-02-13T15:19:40.979094240Z" level=info msg="RemovePodSandbox \"afab6a2da11710e73a71172ac99bdf5b5b0f998c4f9fca1a765fb7d7b8d9928a\" returns successfully"
Feb 13 15:19:40.997645 systemd[1]: Started sshd@28-172.31.28.87:22-139.178.68.195:41058.service - OpenSSH per-connection server daemon (139.178.68.195:41058).
Feb 13 15:19:41.004260 systemd-logind[1917]: Removed session 28.
Feb 13 15:19:41.072912 containerd[1944]: time="2025-02-13T15:19:41.071002733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:19:41.072912 containerd[1944]: time="2025-02-13T15:19:41.071090681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:19:41.072912 containerd[1944]: time="2025-02-13T15:19:41.071152349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:41.073400 containerd[1944]: time="2025-02-13T15:19:41.073184105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:41.121378 systemd[1]: Started cri-containerd-59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67.scope - libcontainer container 59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67.
Feb 13 15:19:41.170614 containerd[1944]: time="2025-02-13T15:19:41.170471093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bvmwv,Uid:106ef485-2bb6-41ff-bf35-7d73eaef1077,Namespace:kube-system,Attempt:0,} returns sandbox id \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\""
Feb 13 15:19:41.179416 containerd[1944]: time="2025-02-13T15:19:41.178885001Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:19:41.204390 containerd[1944]: time="2025-02-13T15:19:41.204332969Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db\""
Feb 13 15:19:41.207047 containerd[1944]: time="2025-02-13T15:19:41.205588109Z" level=info msg="StartContainer for \"4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db\""
Feb 13 15:19:41.229161 kubelet[3242]: E0213 15:19:41.228998 3242 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:19:41.233852 sshd[5319]: Accepted publickey for core from 139.178.68.195 port 41058 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:41.237403 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:41.246859 systemd-logind[1917]: New session 29 of user core.
Feb 13 15:19:41.254592 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 15:19:41.267452 systemd[1]: Started cri-containerd-4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db.scope - libcontainer container 4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db.
Feb 13 15:19:41.315919 containerd[1944]: time="2025-02-13T15:19:41.315855894Z" level=info msg="StartContainer for \"4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db\" returns successfully"
Feb 13 15:19:41.330872 systemd[1]: cri-containerd-4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db.scope: Deactivated successfully.
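The RunPodSandbox / CreateContainer / StartContainer records above are the standard CRI lifecycle for the first cilium init container (mount-cgroup) inside the new sandbox. A hedged sketch of those two follow-up calls, with the same assumed cri-api bindings and socket path as the sketch earlier; the container config shown is illustrative only, since the real one comes from the pod spec.

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// Sandbox id returned by RunPodSandbox in the record above.
    	sandboxID := "59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67"

    	// CreateContainer registers the container within the sandbox; the image
    	// reference here is an assumption, the log never names it.
    	created, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sandboxID,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
    			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium"}, // assumed image ref
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// StartContainer launches it; systemd then reports the matching
    	// cri-containerd-<id>.scope unit, as in the records above.
    	if _, err := client.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
    		log.Fatal(err)
    	}
    }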
Feb 13 15:19:41.410557 containerd[1944]: time="2025-02-13T15:19:41.410341338Z" level=info msg="shim disconnected" id=4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db namespace=k8s.io
Feb 13 15:19:41.410557 containerd[1944]: time="2025-02-13T15:19:41.410498418Z" level=warning msg="cleaning up after shim disconnected" id=4a101eb5d12e619e7df7617be618f045bf108af0109c372aec1159b2c0fa82db namespace=k8s.io
Feb 13 15:19:41.410557 containerd[1944]: time="2025-02-13T15:19:41.410519430Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:42.453219 containerd[1944]: time="2025-02-13T15:19:42.452877967Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:19:42.486288 containerd[1944]: time="2025-02-13T15:19:42.486082244Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8\""
Feb 13 15:19:42.489536 containerd[1944]: time="2025-02-13T15:19:42.487780520Z" level=info msg="StartContainer for \"379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8\""
Feb 13 15:19:42.547582 systemd[1]: Started cri-containerd-379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8.scope - libcontainer container 379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8.
Feb 13 15:19:42.610777 containerd[1944]: time="2025-02-13T15:19:42.610591004Z" level=info msg="StartContainer for \"379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8\" returns successfully"
Feb 13 15:19:42.621850 systemd[1]: cri-containerd-379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8.scope: Deactivated successfully.
Feb 13 15:19:42.671706 containerd[1944]: time="2025-02-13T15:19:42.671624300Z" level=info msg="shim disconnected" id=379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8 namespace=k8s.io
Feb 13 15:19:42.671706 containerd[1944]: time="2025-02-13T15:19:42.671701592Z" level=warning msg="cleaning up after shim disconnected" id=379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8 namespace=k8s.io
Feb 13 15:19:42.672039 containerd[1944]: time="2025-02-13T15:19:42.671722652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:42.847664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379342510916047ab67c2fb59cd21cfb10ec15a6fe7c5d67cf40663a53002db8-rootfs.mount: Deactivated successfully.
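Each short-lived init container exits as soon as its work is done, so containerd emits the same "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" triple per container id. A small standard-library sketch for pulling those ids out of a saved copy of this journal; the file name and the one-record-per-line layout are assumptions.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// "journal.txt" is an assumed capture of this log, one record per line.
    	f, err := os.Open("journal.txt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	// Matches the 64-hex container id in containerd's "shim disconnected" records.
    	re := regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

    	seen := map[string]int{}
    	sc := bufio.NewScanner(f)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		if m := re.FindStringSubmatch(sc.Text()); m != nil {
    			seen[m[1]]++
    		}
    	}
    	for id, n := range seen {
    		fmt.Printf("%s exited %d time(s)\n", id[:12], n)
    	}
    }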
Feb 13 15:19:43.459038 containerd[1944]: time="2025-02-13T15:19:43.458870504Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:19:43.503571 containerd[1944]: time="2025-02-13T15:19:43.503336409Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384\""
Feb 13 15:19:43.505894 containerd[1944]: time="2025-02-13T15:19:43.505774341Z" level=info msg="StartContainer for \"b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384\""
Feb 13 15:19:43.575736 systemd[1]: Started cri-containerd-b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384.scope - libcontainer container b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384.
Feb 13 15:19:43.696079 systemd[1]: cri-containerd-b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384.scope: Deactivated successfully.
Feb 13 15:19:43.696801 containerd[1944]: time="2025-02-13T15:19:43.696424306Z" level=info msg="StartContainer for \"b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384\" returns successfully"
Feb 13 15:19:43.746040 containerd[1944]: time="2025-02-13T15:19:43.745819222Z" level=info msg="shim disconnected" id=b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384 namespace=k8s.io
Feb 13 15:19:43.746040 containerd[1944]: time="2025-02-13T15:19:43.745933846Z" level=warning msg="cleaning up after shim disconnected" id=b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384 namespace=k8s.io
Feb 13 15:19:43.746040 containerd[1944]: time="2025-02-13T15:19:43.745978366Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:43.766984 containerd[1944]: time="2025-02-13T15:19:43.766902610Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:19:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:19:43.845866 kubelet[3242]: I0213 15:19:43.845645 3242 setters.go:580] "Node became not ready" node="ip-172-31-28-87" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:19:43Z","lastTransitionTime":"2025-02-13T15:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:19:43.848805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6f3fbbdd10e16ed6bf69a576c6e429c3aef69713e516caa6f3234fa5958f384-rootfs.mount: Deactivated successfully.
Feb 13 15:19:44.470278 containerd[1944]: time="2025-02-13T15:19:44.469638357Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:19:44.507629 containerd[1944]: time="2025-02-13T15:19:44.506142922Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c\"" Feb 13 15:19:44.507834 containerd[1944]: time="2025-02-13T15:19:44.507721030Z" level=info msg="StartContainer for \"3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c\"" Feb 13 15:19:44.568600 systemd[1]: Started cri-containerd-3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c.scope - libcontainer container 3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c. Feb 13 15:19:44.614386 systemd[1]: cri-containerd-3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c.scope: Deactivated successfully. Feb 13 15:19:44.619417 containerd[1944]: time="2025-02-13T15:19:44.619283158Z" level=info msg="StartContainer for \"3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c\" returns successfully" Feb 13 15:19:44.660326 containerd[1944]: time="2025-02-13T15:19:44.660238762Z" level=info msg="shim disconnected" id=3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c namespace=k8s.io Feb 13 15:19:44.660326 containerd[1944]: time="2025-02-13T15:19:44.660316294Z" level=warning msg="cleaning up after shim disconnected" id=3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c namespace=k8s.io Feb 13 15:19:44.660326 containerd[1944]: time="2025-02-13T15:19:44.660338122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:19:44.848641 systemd[1]: run-containerd-runc-k8s.io-3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c-runc.11yy5z.mount: Deactivated successfully. Feb 13 15:19:44.849587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3edbd133f6c01993af7427adf7f3d2d2f2e49cf03cae3f1d4484a5819a3a6e9c-rootfs.mount: Deactivated successfully. Feb 13 15:19:45.475204 containerd[1944]: time="2025-02-13T15:19:45.474870022Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:19:45.513955 containerd[1944]: time="2025-02-13T15:19:45.513882347Z" level=info msg="CreateContainer within sandbox \"59742a7b8f43bb873fc9769465e55202ac9a03bb62f649c43d3842555f439f67\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e09e83df9886d822589a74bb1c8068a6e4601ac9a4c843223a0c26f95952e8c\"" Feb 13 15:19:45.514892 containerd[1944]: time="2025-02-13T15:19:45.514594151Z" level=info msg="StartContainer for \"0e09e83df9886d822589a74bb1c8068a6e4601ac9a4c843223a0c26f95952e8c\"" Feb 13 15:19:45.570460 systemd[1]: Started cri-containerd-0e09e83df9886d822589a74bb1c8068a6e4601ac9a4c843223a0c26f95952e8c.scope - libcontainer container 0e09e83df9886d822589a74bb1c8068a6e4601ac9a4c843223a0c26f95952e8c. 
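With clean-cilium-state done, the log reaches the long-running cilium-agent container: the init chain ran in order as mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, each started only after the previous one exited cleanly. A toy sketch of that strictly sequential ordering; runStep is a hypothetical stand-in for the CRI create/start/wait calls, not a real API.

    package main

    import "fmt"

    // runStep is a hypothetical stand-in for CreateContainer + StartContainer +
    // waiting on the exit status; it is not a real containerd or kubelet API.
    func runStep(name string) error {
    	fmt.Println("running init container:", name)
    	return nil // pretend it exited 0, as each one did in the log above
    }

    func main() {
    	// Order taken from the records above.
    	steps := []string{
    		"mount-cgroup",
    		"apply-sysctl-overwrites",
    		"mount-bpf-fs",
    		"clean-cilium-state",
    	}
    	for _, s := range steps {
    		if err := runStep(s); err != nil {
    			// Kubelet would restart the failed init container; later ones never run.
    			fmt.Println("init failed, aborting:", err)
    			return
    		}
    	}
    	fmt.Println("starting main container: cilium-agent")
    }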
Feb 13 15:19:45.629636 containerd[1944]: time="2025-02-13T15:19:45.629450495Z" level=info msg="StartContainer for \"0e09e83df9886d822589a74bb1c8068a6e4601ac9a4c843223a0c26f95952e8c\" returns successfully"
Feb 13 15:19:46.417218 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:19:47.895867 systemd[1]: run-containerd-runc-k8s.io-0e09e83df9886d822589a74bb1c8068a6e4601ac9a4c843223a0c26f95952e8c-runc.fjBjtT.mount: Deactivated successfully.
Feb 13 15:19:48.043507 update_engine[1918]: I20250213 15:19:48.043213 1918 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:48.044537 update_engine[1918]: I20250213 15:19:48.044476 1918 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:48.044917 update_engine[1918]: I20250213 15:19:48.044857 1918 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:48.045415 update_engine[1918]: E20250213 15:19:48.045369 1918 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:48.045479 update_engine[1918]: I20250213 15:19:48.045456 1918 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 15:19:50.566675 systemd-networkd[1849]: lxc_health: Link UP
Feb 13 15:19:50.575015 (udev-worker)[6159]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:19:50.578792 systemd-networkd[1849]: lxc_health: Gained carrier
Feb 13 15:19:51.013153 kubelet[3242]: I0213 15:19:51.012105 3242 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bvmwv" podStartSLOduration=11.012083366 podStartE2EDuration="11.012083366s" podCreationTimestamp="2025-02-13 15:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:19:46.508634628 +0000 UTC m=+125.788332494" watchObservedRunningTime="2025-02-13 15:19:51.012083366 +0000 UTC m=+130.291781160"
Feb 13 15:19:52.617300 systemd-networkd[1849]: lxc_health: Gained IPv6LL
Feb 13 15:19:55.249945 ntpd[1909]: Listen normally on 15 lxc_health [fe80::246f:a4ff:fe46:1779%14]:123
Feb 13 15:19:55.250552 ntpd[1909]: 13 Feb 15:19:55 ntpd[1909]: Listen normally on 15 lxc_health [fe80::246f:a4ff:fe46:1779%14]:123
Feb 13 15:19:57.118230 kubelet[3242]: E0213 15:19:57.117622 3242 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:37166->127.0.0.1:43707: write tcp 172.31.28.87:10250->172.31.28.87:51098: write: broken pipe
Feb 13 15:19:57.119378 kubelet[3242]: E0213 15:19:57.118892 3242 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37166->127.0.0.1:43707: write tcp 127.0.0.1:37166->127.0.0.1:43707: write: broken pipe
Feb 13 15:19:57.145505 sshd[5379]: Connection closed by 139.178.68.195 port 41058
Feb 13 15:19:57.145848 sshd-session[5319]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:57.154686 systemd[1]: sshd@28-172.31.28.87:22-139.178.68.195:41058.service: Deactivated successfully.
Feb 13 15:19:57.163195 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 15:19:57.165213 systemd-logind[1917]: Session 29 logged out. Waiting for processes to exit.
Feb 13 15:19:57.168629 systemd-logind[1917]: Removed session 29.
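The update_engine burst at 15:19:48 shows the Omaha fetcher, on a one-second timeout source, failing to resolve the deliberately disabled update host and counting up to "retry 3". A standard-library sketch of that bounded-retry fetch pattern; the URL, retry cap, and sleep interval here are read off the log, not taken from the update_engine sources.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func fetchWithRetries(url string, retries int) error {
    	client := &http.Client{Timeout: time.Second} // "Setting up timeout source: 1 seconds"
    	var err error
    	for i := 1; i <= retries; i++ {
    		var resp *http.Response
    		resp, err = client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			return nil
    		}
    		fmt.Printf("No HTTP response, retry %d: %v\n", i, err)
    		time.Sleep(time.Second) // assumed pacing between attempts
    	}
    	return err
    }

    func main() {
    	// The update host on this image is literally "disabled", so DNS always fails.
    	if err := fetchWithRetries("http://disabled/", 3); err != nil {
    		// update_engine reports the failure upward and reschedules the check,
    		// as the 15:19:58 records below go on to show.
    		fmt.Println("transfer failed:", err)
    	}
    }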
Feb 13 15:19:58.041988 update_engine[1918]: I20250213 15:19:58.041270 1918 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:58.041988 update_engine[1918]: I20250213 15:19:58.041631 1918 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:58.043054 update_engine[1918]: I20250213 15:19:58.042786 1918 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:58.043940 update_engine[1918]: E20250213 15:19:58.043437 1918 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043530 1918 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043550 1918 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:19:58.043940 update_engine[1918]: E20250213 15:19:58.043684 1918 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043721 1918 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043738 1918 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043753 1918 update_attempter.cc:306] Processing Done.
Feb 13 15:19:58.043940 update_engine[1918]: E20250213 15:19:58.043779 1918 update_attempter.cc:619] Update failed.
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043795 1918 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043812 1918 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 15:19:58.043940 update_engine[1918]: I20250213 15:19:58.043828 1918 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 15:19:58.046277 update_engine[1918]: I20250213 15:19:58.045176 1918 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 15:19:58.046277 update_engine[1918]: I20250213 15:19:58.045254 1918 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 15:19:58.046277 update_engine[1918]: I20250213 15:19:58.045272 1918 omaha_request_action.cc:272] Request:
Feb 13 15:19:58.046277 update_engine[1918]:
Feb 13 15:19:58.046277 update_engine[1918]:
Feb 13 15:19:58.046277 update_engine[1918]:
Feb 13 15:19:58.046277 update_engine[1918]:
Feb 13 15:19:58.046277 update_engine[1918]:
Feb 13 15:19:58.046277 update_engine[1918]:
Feb 13 15:19:58.046277 update_engine[1918]: I20250213 15:19:58.045289 1918 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:58.046277 update_engine[1918]: I20250213 15:19:58.045583 1918 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:58.046277 update_engine[1918]: I20250213 15:19:58.046006 1918 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:58.047663 update_engine[1918]: E20250213 15:19:58.047083 1918 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:58.047663 update_engine[1918]: I20250213 15:19:58.047217 1918 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:19:58.047663 update_engine[1918]: I20250213 15:19:58.047240 1918 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:19:58.047663 update_engine[1918]: I20250213 15:19:58.047259 1918 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:19:58.047663 update_engine[1918]: I20250213 15:19:58.047276 1918 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:19:58.047663 update_engine[1918]: I20250213 15:19:58.047291 1918 update_attempter.cc:306] Processing Done.
Feb 13 15:19:58.047663 update_engine[1918]: I20250213 15:19:58.047362 1918 update_attempter.cc:310] Error event sent.
Feb 13 15:19:58.047663 update_engine[1918]: I20250213 15:19:58.047391 1918 update_check_scheduler.cc:74] Next update check in 44m48s
Feb 13 15:19:58.048544 locksmithd[1960]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 15:19:58.048544 locksmithd[1960]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 15:20:11.433861 systemd[1]: cri-containerd-f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff.scope: Deactivated successfully.
Feb 13 15:20:11.434915 systemd[1]: cri-containerd-f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff.scope: Consumed 6.058s CPU time, 21.9M memory peak, 0B memory swap peak.
Feb 13 15:20:11.474944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff-rootfs.mount: Deactivated successfully.
Feb 13 15:20:11.498067 containerd[1944]: time="2025-02-13T15:20:11.497963700Z" level=info msg="shim disconnected" id=f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff namespace=k8s.io
Feb 13 15:20:11.498067 containerd[1944]: time="2025-02-13T15:20:11.498050400Z" level=warning msg="cleaning up after shim disconnected" id=f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff namespace=k8s.io
Feb 13 15:20:11.498067 containerd[1944]: time="2025-02-13T15:20:11.498072696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:20:11.556837 kubelet[3242]: I0213 15:20:11.556754 3242 scope.go:117] "RemoveContainer" containerID="f4bd480fd36f168da794113f185f6867ab8542b6f3d2da1c82cbf9b89dfeb2ff"
Feb 13 15:20:11.561841 containerd[1944]: time="2025-02-13T15:20:11.561775260Z" level=info msg="CreateContainer within sandbox \"6414b32b86e6723e394947926757cedcd644562d2d067934a7992c65ebdb8276\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:20:11.588965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243433473.mount: Deactivated successfully.
Feb 13 15:20:11.590182 containerd[1944]: time="2025-02-13T15:20:11.590006784Z" level=info msg="CreateContainer within sandbox \"6414b32b86e6723e394947926757cedcd644562d2d067934a7992c65ebdb8276\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6fa6f511fb172f095c8c30c6fcecd53c694e742f04856496b4c03466937a0d95\""
Feb 13 15:20:11.591444 containerd[1944]: time="2025-02-13T15:20:11.591383832Z" level=info msg="StartContainer for \"6fa6f511fb172f095c8c30c6fcecd53c694e742f04856496b4c03466937a0d95\""
Feb 13 15:20:11.641405 systemd[1]: Started cri-containerd-6fa6f511fb172f095c8c30c6fcecd53c694e742f04856496b4c03466937a0d95.scope - libcontainer container 6fa6f511fb172f095c8c30c6fcecd53c694e742f04856496b4c03466937a0d95.
Feb 13 15:20:11.711716 containerd[1944]: time="2025-02-13T15:20:11.711638929Z" level=info msg="StartContainer for \"6fa6f511fb172f095c8c30c6fcecd53c694e742f04856496b4c03466937a0d95\" returns successfully"
Feb 13 15:20:13.378238 kubelet[3242]: E0213 15:20:13.378164 3242 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-87?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:20:15.941583 systemd[1]: cri-containerd-671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1.scope: Deactivated successfully.
Feb 13 15:20:15.943364 systemd[1]: cri-containerd-671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1.scope: Consumed 4.393s CPU time, 15.8M memory peak, 0B memory swap peak.
Feb 13 15:20:15.981782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1-rootfs.mount: Deactivated successfully.
Feb 13 15:20:15.995993 containerd[1944]: time="2025-02-13T15:20:15.995874354Z" level=info msg="shim disconnected" id=671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1 namespace=k8s.io
Feb 13 15:20:15.995993 containerd[1944]: time="2025-02-13T15:20:15.995953062Z" level=warning msg="cleaning up after shim disconnected" id=671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1 namespace=k8s.io
Feb 13 15:20:15.995993 containerd[1944]: time="2025-02-13T15:20:15.995976858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:20:16.577051 kubelet[3242]: I0213 15:20:16.577014 3242 scope.go:117] "RemoveContainer" containerID="671658b527bdce86e0cab1cd4bb918add15648b5e100863db1d55db2cf918bc1"
Feb 13 15:20:16.581232 containerd[1944]: time="2025-02-13T15:20:16.581175965Z" level=info msg="CreateContainer within sandbox \"bee4981e0912e20c0b4e78f3c03adad45dacd483de98a34d44327d0d37c4b223\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:20:16.609580 containerd[1944]: time="2025-02-13T15:20:16.609413969Z" level=info msg="CreateContainer within sandbox \"bee4981e0912e20c0b4e78f3c03adad45dacd483de98a34d44327d0d37c4b223\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"72695516967c9122d36fba1354f52e9e4d9b7d07a4fcf0887f85665f3f3b9f45\""
Feb 13 15:20:16.612055 containerd[1944]: time="2025-02-13T15:20:16.610157849Z" level=info msg="StartContainer for \"72695516967c9122d36fba1354f52e9e4d9b7d07a4fcf0887f85665f3f3b9f45\""
Feb 13 15:20:16.666404 systemd[1]: Started cri-containerd-72695516967c9122d36fba1354f52e9e4d9b7d07a4fcf0887f85665f3f3b9f45.scope - libcontainer container 72695516967c9122d36fba1354f52e9e4d9b7d07a4fcf0887f85665f3f3b9f45.
Feb 13 15:20:16.729383 containerd[1944]: time="2025-02-13T15:20:16.729311406Z" level=info msg="StartContainer for \"72695516967c9122d36fba1354f52e9e4d9b7d07a4fcf0887f85665f3f3b9f45\" returns successfully"
Feb 13 15:20:23.378565 kubelet[3242]: E0213 15:20:23.378425 3242 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-87?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
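The two "Failed to update lease" errors are kubelet's node-lease heartbeat timing out: a PUT to the coordination.k8s.io Lease object with the 10-second timeout visible in the URL, issued while the kube-controller-manager and kube-scheduler containers above were being restarted. A minimal net/http sketch of how such a client-side deadline surfaces; the real kubelet uses client-go with a bearer token and the cluster CA, all of which is elided here, so against a live apiserver this sketch would fail at TLS rather than time out.

    package main

    import (
    	"bytes"
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// URL copied from the log; the Lease body is elided since it isn't shown.
    	url := "https://172.31.28.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-87?timeout=10s"

    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, bytes.NewReader(nil))
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// If the server doesn't answer within the deadline, plain net/http reports
    	// a comparable "context deadline exceeded" error; client-go additionally
    	// appends "(Client.Timeout exceeded while awaiting headers)" as in the log.
    	if _, err := http.DefaultClient.Do(req); err != nil {
    		fmt.Println("Failed to update lease:", err)
    	}
    }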