Jun 20 18:29:50.173474 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jun 20 18:29:50.173523 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Jun 20 17:15:00 -00 2025 Jun 20 18:29:50.173548 kernel: KASLR disabled due to lack of seed Jun 20 18:29:50.173564 kernel: efi: EFI v2.7 by EDK II Jun 20 18:29:50.173580 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598 Jun 20 18:29:50.173596 kernel: secureboot: Secure boot disabled Jun 20 18:29:50.173613 kernel: ACPI: Early table checksum verification disabled Jun 20 18:29:50.173629 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jun 20 18:29:50.173645 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jun 20 18:29:50.173661 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 20 18:29:50.173682 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jun 20 18:29:50.173698 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 20 18:29:50.173714 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jun 20 18:29:50.173757 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jun 20 18:29:50.173783 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jun 20 18:29:50.173807 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 20 18:29:50.173825 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jun 20 18:29:50.173841 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jun 20 18:29:50.173858 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jun 20 18:29:50.173874 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jun 20 18:29:50.173891 kernel: printk: bootconsole [uart0] enabled Jun 20 18:29:50.173907 kernel: NUMA: Failed to initialise from firmware Jun 20 18:29:50.173924 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jun 20 18:29:50.173941 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jun 20 18:29:50.173957 kernel: Zone ranges: Jun 20 18:29:50.173974 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jun 20 18:29:50.173994 kernel: DMA32 empty Jun 20 18:29:50.174033 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jun 20 18:29:50.174055 kernel: Movable zone start for each node Jun 20 18:29:50.174072 kernel: Early memory node ranges Jun 20 18:29:50.174088 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jun 20 18:29:50.174105 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jun 20 18:29:50.174121 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jun 20 18:29:50.174137 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jun 20 18:29:50.174153 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jun 20 18:29:50.174170 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jun 20 18:29:50.174186 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jun 20 18:29:50.174202 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jun 20 18:29:50.174224 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jun 20 18:29:50.174241 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jun 20 18:29:50.174264 kernel: psci: probing for conduit method from ACPI. Jun 20 18:29:50.174282 kernel: psci: PSCIv1.0 detected in firmware. Jun 20 18:29:50.174299 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 18:29:50.174320 kernel: psci: Trusted OS migration not required Jun 20 18:29:50.174337 kernel: psci: SMC Calling Convention v1.1 Jun 20 18:29:50.174355 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jun 20 18:29:50.174372 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jun 20 18:29:50.174390 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 20 18:29:50.174407 kernel: Detected PIPT I-cache on CPU0 Jun 20 18:29:50.174424 kernel: CPU features: detected: GIC system register CPU interface Jun 20 18:29:50.174441 kernel: CPU features: detected: Spectre-v2 Jun 20 18:29:50.174458 kernel: CPU features: detected: Spectre-v3a Jun 20 18:29:50.174476 kernel: CPU features: detected: Spectre-BHB Jun 20 18:29:50.174493 kernel: CPU features: detected: ARM erratum 1742098 Jun 20 18:29:50.174510 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jun 20 18:29:50.174532 kernel: alternatives: applying boot alternatives Jun 20 18:29:50.174551 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e Jun 20 18:29:50.174570 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 18:29:50.174587 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 18:29:50.174605 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:29:50.174622 kernel: Fallback order for Node 0: 0 Jun 20 18:29:50.174639 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jun 20 18:29:50.174656 kernel: Policy zone: Normal Jun 20 18:29:50.174673 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:29:50.174691 kernel: software IO TLB: area num 2. Jun 20 18:29:50.174713 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jun 20 18:29:50.178138 kernel: Memory: 3821176K/4030464K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 209288K reserved, 0K cma-reserved) Jun 20 18:29:50.178187 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:29:50.178206 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:29:50.178226 kernel: rcu: RCU event tracing is enabled. Jun 20 18:29:50.178244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:29:50.178261 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:29:50.178281 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:29:50.178298 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 20 18:29:50.178317 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:29:50.178335 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 18:29:50.178363 kernel: GICv3: 96 SPIs implemented Jun 20 18:29:50.178383 kernel: GICv3: 0 Extended SPIs implemented Jun 20 18:29:50.178402 kernel: Root IRQ handler: gic_handle_irq Jun 20 18:29:50.178419 kernel: GICv3: GICv3 features: 16 PPIs Jun 20 18:29:50.178437 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jun 20 18:29:50.178455 kernel: ITS [mem 0x10080000-0x1009ffff] Jun 20 18:29:50.178472 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jun 20 18:29:50.178490 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jun 20 18:29:50.178507 kernel: GICv3: using LPI property table @0x00000004000d0000 Jun 20 18:29:50.178525 kernel: ITS: Using hypervisor restricted LPI range [128] Jun 20 18:29:50.178542 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jun 20 18:29:50.178559 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:29:50.178581 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jun 20 18:29:50.178598 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jun 20 18:29:50.178616 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jun 20 18:29:50.178633 kernel: Console: colour dummy device 80x25 Jun 20 18:29:50.178651 kernel: printk: console [tty1] enabled Jun 20 18:29:50.178669 kernel: ACPI: Core revision 20230628 Jun 20 18:29:50.178687 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jun 20 18:29:50.178705 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:29:50.178739 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 18:29:50.178764 kernel: landlock: Up and running. Jun 20 18:29:50.178789 kernel: SELinux: Initializing. Jun 20 18:29:50.178807 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:29:50.178825 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:29:50.178844 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:29:50.178862 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:29:50.178879 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:29:50.178897 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:29:50.178915 kernel: Platform MSI: ITS@0x10080000 domain created Jun 20 18:29:50.178938 kernel: PCI/MSI: ITS@0x10080000 domain created Jun 20 18:29:50.178957 kernel: Remapping and enabling EFI services. Jun 20 18:29:50.178975 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:29:50.178992 kernel: Detected PIPT I-cache on CPU1 Jun 20 18:29:50.179010 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jun 20 18:29:50.179028 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jun 20 18:29:50.179047 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jun 20 18:29:50.179064 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:29:50.179082 kernel: SMP: Total of 2 processors activated. 
Jun 20 18:29:50.179099 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 18:29:50.179121 kernel: CPU features: detected: 32-bit EL1 Support Jun 20 18:29:50.179140 kernel: CPU features: detected: CRC32 instructions Jun 20 18:29:50.179168 kernel: CPU: All CPU(s) started at EL1 Jun 20 18:29:50.179191 kernel: alternatives: applying system-wide alternatives Jun 20 18:29:50.179210 kernel: devtmpfs: initialized Jun 20 18:29:50.179228 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:29:50.179248 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:29:50.179268 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:29:50.179287 kernel: SMBIOS 3.0.0 present. Jun 20 18:29:50.179311 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jun 20 18:29:50.179330 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:29:50.179349 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 20 18:29:50.179368 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 18:29:50.179388 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 18:29:50.179407 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:29:50.179427 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1 Jun 20 18:29:50.179451 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:29:50.179470 kernel: cpuidle: using governor menu Jun 20 18:29:50.179490 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 20 18:29:50.179509 kernel: ASID allocator initialised with 65536 entries Jun 20 18:29:50.179528 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:29:50.179547 kernel: Serial: AMBA PL011 UART driver Jun 20 18:29:50.179567 kernel: Modules: 17744 pages in range for non-PLT usage Jun 20 18:29:50.179589 kernel: Modules: 509264 pages in range for PLT usage Jun 20 18:29:50.179609 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:29:50.179633 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:29:50.179653 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 18:29:50.179674 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 18:29:50.179694 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:29:50.179713 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:29:50.179785 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 18:29:50.179808 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 18:29:50.179827 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:29:50.179845 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:29:50.179871 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:29:50.179890 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 18:29:50.179909 kernel: ACPI: Interpreter enabled Jun 20 18:29:50.179927 kernel: ACPI: Using GIC for interrupt routing Jun 20 18:29:50.179946 kernel: ACPI: MCFG table detected, 1 entries Jun 20 18:29:50.179964 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jun 20 18:29:50.180294 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 18:29:50.180493 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 20 18:29:50.180694 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 20 18:29:50.181000 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jun 20 18:29:50.181217 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jun 20 18:29:50.181244 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jun 20 18:29:50.181265 kernel: acpiphp: Slot [1] registered Jun 20 18:29:50.181285 kernel: acpiphp: Slot [2] registered Jun 20 18:29:50.181304 kernel: acpiphp: Slot [3] registered Jun 20 18:29:50.181323 kernel: acpiphp: Slot [4] registered Jun 20 18:29:50.181351 kernel: acpiphp: Slot [5] registered Jun 20 18:29:50.181370 kernel: acpiphp: Slot [6] registered Jun 20 18:29:50.181388 kernel: acpiphp: Slot [7] registered Jun 20 18:29:50.181406 kernel: acpiphp: Slot [8] registered Jun 20 18:29:50.181425 kernel: acpiphp: Slot [9] registered Jun 20 18:29:50.181443 kernel: acpiphp: Slot [10] registered Jun 20 18:29:50.181462 kernel: acpiphp: Slot [11] registered Jun 20 18:29:50.181480 kernel: acpiphp: Slot [12] registered Jun 20 18:29:50.181498 kernel: acpiphp: Slot [13] registered Jun 20 18:29:50.181517 kernel: acpiphp: Slot [14] registered Jun 20 18:29:50.181540 kernel: acpiphp: Slot [15] registered Jun 20 18:29:50.181558 kernel: acpiphp: Slot [16] registered Jun 20 18:29:50.181577 kernel: acpiphp: Slot [17] registered Jun 20 18:29:50.181596 kernel: acpiphp: Slot [18] registered Jun 20 18:29:50.181614 kernel: acpiphp: Slot [19] registered Jun 20 18:29:50.181633 kernel: acpiphp: Slot [20] registered Jun 20 18:29:50.181651 kernel: acpiphp: Slot [21] registered Jun 20 18:29:50.181669 kernel: acpiphp: Slot [22] registered Jun 20 18:29:50.181688 kernel: acpiphp: Slot [23] registered Jun 20 18:29:50.181710 kernel: acpiphp: Slot [24] registered Jun 20 18:29:50.181818 kernel: acpiphp: Slot [25] registered Jun 20 18:29:50.181840 kernel: acpiphp: Slot [26] registered Jun 20 18:29:50.181860 kernel: acpiphp: Slot [27] registered Jun 20 18:29:50.181879 kernel: acpiphp: Slot [28] registered Jun 20 18:29:50.181897 kernel: acpiphp: Slot [29] registered Jun 20 18:29:50.181915 kernel: acpiphp: Slot [30] registered Jun 20 18:29:50.181934 kernel: acpiphp: Slot [31] registered Jun 20 18:29:50.181952 kernel: PCI host bridge to bus 0000:00 Jun 20 18:29:50.182205 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jun 20 18:29:50.182405 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 20 18:29:50.182599 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jun 20 18:29:50.182812 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jun 20 18:29:50.183057 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jun 20 18:29:50.183287 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jun 20 18:29:50.183493 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jun 20 18:29:50.183717 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 20 18:29:50.183982 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jun 20 18:29:50.184186 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 18:29:50.184404 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 20 18:29:50.184611 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jun 20 18:29:50.187083 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jun 20 18:29:50.187330 kernel: pci 0000:00:05.0: reg 0x20: 
[mem 0x80100000-0x8010ffff] Jun 20 18:29:50.187537 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 18:29:50.187802 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jun 20 18:29:50.188033 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jun 20 18:29:50.188256 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jun 20 18:29:50.188462 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jun 20 18:29:50.188672 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jun 20 18:29:50.189960 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jun 20 18:29:50.190181 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 20 18:29:50.190363 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jun 20 18:29:50.190389 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 20 18:29:50.190409 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 20 18:29:50.190428 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 20 18:29:50.190447 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 20 18:29:50.190465 kernel: iommu: Default domain type: Translated Jun 20 18:29:50.190484 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 20 18:29:50.190509 kernel: efivars: Registered efivars operations Jun 20 18:29:50.190528 kernel: vgaarb: loaded Jun 20 18:29:50.190546 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 20 18:29:50.190565 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:29:50.190583 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:29:50.190602 kernel: pnp: PnP ACPI init Jun 20 18:29:50.191859 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jun 20 18:29:50.191891 kernel: pnp: PnP ACPI: found 1 devices Jun 20 18:29:50.191918 kernel: NET: Registered PF_INET protocol family Jun 20 18:29:50.191937 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 18:29:50.191957 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 18:29:50.191976 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:29:50.191995 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:29:50.192014 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 18:29:50.192033 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 18:29:50.192051 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:29:50.192070 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:29:50.192093 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:29:50.192112 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:29:50.192131 kernel: kvm [1]: HYP mode not available Jun 20 18:29:50.192150 kernel: Initialise system trusted keyrings Jun 20 18:29:50.192168 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 18:29:50.192187 kernel: Key type asymmetric registered Jun 20 18:29:50.192205 kernel: Asymmetric key parser 'x509' registered Jun 20 18:29:50.192224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 18:29:50.192242 kernel: io scheduler mq-deadline registered Jun 20 18:29:50.192265 kernel: io scheduler kyber registered Jun 20 18:29:50.192284 kernel: io 
scheduler bfq registered Jun 20 18:29:50.192497 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jun 20 18:29:50.192524 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 20 18:29:50.192543 kernel: ACPI: button: Power Button [PWRB] Jun 20 18:29:50.192562 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jun 20 18:29:50.192581 kernel: ACPI: button: Sleep Button [SLPB] Jun 20 18:29:50.192599 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:29:50.192624 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 20 18:29:50.194993 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jun 20 18:29:50.195044 kernel: printk: console [ttyS0] disabled Jun 20 18:29:50.195065 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jun 20 18:29:50.195089 kernel: printk: console [ttyS0] enabled Jun 20 18:29:50.195108 kernel: printk: bootconsole [uart0] disabled Jun 20 18:29:50.195128 kernel: thunder_xcv, ver 1.0 Jun 20 18:29:50.195148 kernel: thunder_bgx, ver 1.0 Jun 20 18:29:50.195168 kernel: nicpf, ver 1.0 Jun 20 18:29:50.195189 kernel: nicvf, ver 1.0 Jun 20 18:29:50.195445 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 18:29:50.195638 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:29:49 UTC (1750444189) Jun 20 18:29:50.195664 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 18:29:50.195684 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jun 20 18:29:50.195704 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 20 18:29:50.195752 kernel: watchdog: Hard watchdog permanently disabled Jun 20 18:29:50.195778 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:29:50.195806 kernel: Segment Routing with IPv6 Jun 20 18:29:50.195825 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:29:50.195844 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:29:50.195864 kernel: Key type dns_resolver registered Jun 20 18:29:50.195975 kernel: registered taskstats version 1 Jun 20 18:29:50.196080 kernel: Loading compiled-in X.509 certificates Jun 20 18:29:50.196099 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 8506faa781fda315da94c2790de0e5c860361c93' Jun 20 18:29:50.196118 kernel: Key type .fscrypt registered Jun 20 18:29:50.196136 kernel: Key type fscrypt-provisioning registered Jun 20 18:29:50.196154 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:29:50.196179 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:29:50.196198 kernel: ima: No architecture policies found Jun 20 18:29:50.196216 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 18:29:50.196234 kernel: clk: Disabling unused clocks Jun 20 18:29:50.196253 kernel: Freeing unused kernel memory: 38336K Jun 20 18:29:50.196271 kernel: Run /init as init process Jun 20 18:29:50.196289 kernel: with arguments: Jun 20 18:29:50.196308 kernel: /init Jun 20 18:29:50.196326 kernel: with environment: Jun 20 18:29:50.196349 kernel: HOME=/ Jun 20 18:29:50.196367 kernel: TERM=linux Jun 20 18:29:50.196385 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:29:50.196406 systemd[1]: Successfully made /usr/ read-only. 
Jun 20 18:29:50.196431 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:29:50.196452 systemd[1]: Detected virtualization amazon. Jun 20 18:29:50.196472 systemd[1]: Detected architecture arm64. Jun 20 18:29:50.196495 systemd[1]: Running in initrd. Jun 20 18:29:50.196515 systemd[1]: No hostname configured, using default hostname. Jun 20 18:29:50.196535 systemd[1]: Hostname set to . Jun 20 18:29:50.196555 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:29:50.196575 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:29:50.196594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:29:50.196615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:29:50.196636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:29:50.196661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:29:50.196682 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 18:29:50.196703 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 18:29:50.198758 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:29:50.198799 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:29:50.198820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:29:50.198841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:29:50.198870 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:29:50.198891 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:29:50.198911 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:29:50.198931 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:29:50.198951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:29:50.198971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:29:50.198992 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:29:50.199012 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 18:29:50.199032 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:29:50.199057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:29:50.199078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:29:50.199098 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:29:50.199118 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:29:50.199138 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:29:50.199158 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 18:29:50.199178 systemd[1]: Starting systemd-fsck-usr.service... 
Jun 20 18:29:50.199198 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:29:50.199223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:29:50.199243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:29:50.199263 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:29:50.199341 systemd-journald[252]: Collecting audit messages is disabled. Jun 20 18:29:50.199389 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:29:50.199411 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:29:50.199432 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:29:50.199452 systemd-journald[252]: Journal started Jun 20 18:29:50.199495 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2da9d1ab7fc4fd8424404e944f314d) is 8M, max 75.3M, 67.3M free. Jun 20 18:29:50.169298 systemd-modules-load[254]: Inserted module 'overlay' Jun 20 18:29:50.214776 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:29:50.214857 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:29:50.220583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:29:50.226375 kernel: Bridge firewalling registered Jun 20 18:29:50.221623 systemd-modules-load[254]: Inserted module 'br_netfilter' Jun 20 18:29:50.231158 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:29:50.251934 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:29:50.258435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:29:50.261114 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:29:50.262477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:29:50.292350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:29:50.315909 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:29:50.322126 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:29:50.333088 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:29:50.341810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:29:50.348934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:29:50.367275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 20 18:29:50.382092 dracut-cmdline[286]: dracut-dracut-053 Jun 20 18:29:50.386861 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e Jun 20 18:29:50.470091 systemd-resolved[291]: Positive Trust Anchors: Jun 20 18:29:50.470129 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:29:50.470192 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:29:50.526765 kernel: SCSI subsystem initialized Jun 20 18:29:50.535749 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:29:50.546761 kernel: iscsi: registered transport (tcp) Jun 20 18:29:50.569892 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:29:50.569971 kernel: QLogic iSCSI HBA Driver Jun 20 18:29:50.680756 kernel: random: crng init done Jun 20 18:29:50.681012 systemd-resolved[291]: Defaulting to hostname 'linux'. Jun 20 18:29:50.684621 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:29:50.687295 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:29:50.714324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:29:50.725134 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 18:29:50.758224 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 18:29:50.758303 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:29:50.758329 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 18:29:50.825785 kernel: raid6: neonx8 gen() 6616 MB/s Jun 20 18:29:50.843758 kernel: raid6: neonx4 gen() 6580 MB/s Jun 20 18:29:50.860757 kernel: raid6: neonx2 gen() 5466 MB/s Jun 20 18:29:50.877757 kernel: raid6: neonx1 gen() 3953 MB/s Jun 20 18:29:50.894756 kernel: raid6: int64x8 gen() 3629 MB/s Jun 20 18:29:50.911757 kernel: raid6: int64x4 gen() 3692 MB/s Jun 20 18:29:50.928756 kernel: raid6: int64x2 gen() 3624 MB/s Jun 20 18:29:50.946677 kernel: raid6: int64x1 gen() 2768 MB/s Jun 20 18:29:50.946709 kernel: raid6: using algorithm neonx8 gen() 6616 MB/s Jun 20 18:29:50.964759 kernel: raid6: .... 
xor() 4737 MB/s, rmw enabled Jun 20 18:29:50.964796 kernel: raid6: using neon recovery algorithm Jun 20 18:29:50.973139 kernel: xor: measuring software checksum speed Jun 20 18:29:50.973188 kernel: 8regs : 12919 MB/sec Jun 20 18:29:50.974352 kernel: 32regs : 13042 MB/sec Jun 20 18:29:50.975652 kernel: arm64_neon : 9585 MB/sec Jun 20 18:29:50.975699 kernel: xor: using function: 32regs (13042 MB/sec) Jun 20 18:29:51.059772 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:29:51.079820 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:29:51.104025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:29:51.142055 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jun 20 18:29:51.152768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:29:51.168802 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:29:51.208008 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Jun 20 18:29:51.264247 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:29:51.277176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:29:51.391529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:29:51.406001 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 18:29:51.463782 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:29:51.466492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:29:51.471202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:29:51.483402 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:29:51.507122 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:29:51.541886 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:29:51.591553 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 20 18:29:51.591621 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jun 20 18:29:51.601557 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 20 18:29:51.601915 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 20 18:29:51.616762 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:79:89:a2:9a:43 Jun 20 18:29:51.621180 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:29:51.633896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:29:51.634181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:29:51.661433 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 20 18:29:51.661479 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 20 18:29:51.644637 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:29:51.650152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:29:51.674628 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 20 18:29:51.650423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:29:51.654658 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:29:51.692165 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Jun 20 18:29:51.692222 kernel: GPT:9289727 != 16777215 Jun 20 18:29:51.692249 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 18:29:51.692437 kernel: GPT:9289727 != 16777215 Jun 20 18:29:51.692467 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 18:29:51.692493 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:29:51.675096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:29:51.681504 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:29:51.724624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:29:51.735982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:29:51.776251 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:29:51.813792 kernel: BTRFS: device fsid c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (534) Jun 20 18:29:51.834773 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (517) Jun 20 18:29:51.912628 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 20 18:29:51.944564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 20 18:29:51.987344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 20 18:29:52.009135 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 20 18:29:52.011881 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 20 18:29:52.027967 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:29:52.043116 disk-uuid[662]: Primary Header is updated. Jun 20 18:29:52.043116 disk-uuid[662]: Secondary Entries is updated. Jun 20 18:29:52.043116 disk-uuid[662]: Secondary Header is updated. Jun 20 18:29:52.052778 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:29:53.070762 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 20 18:29:53.072832 disk-uuid[663]: The operation has completed successfully. Jun 20 18:29:53.251139 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:29:53.251685 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:29:53.346053 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:29:53.356351 sh[923]: Success Jun 20 18:29:53.373799 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 20 18:29:53.475714 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:29:53.499982 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:29:53.510323 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 18:29:53.543071 kernel: BTRFS info (device dm-0): first mount of filesystem c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f Jun 20 18:29:53.543135 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:29:53.543161 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 18:29:53.544526 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 18:29:53.545753 kernel: BTRFS info (device dm-0): using free space tree Jun 20 18:29:53.673770 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 20 18:29:53.704047 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:29:53.708364 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:29:53.724049 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:29:53.730989 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:29:53.785784 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:29:53.785856 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:29:53.787295 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 20 18:29:53.794787 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 20 18:29:53.803806 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:29:53.807840 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:29:53.818091 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:29:53.900983 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:29:53.914151 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:29:53.971550 systemd-networkd[1113]: lo: Link UP Jun 20 18:29:53.971572 systemd-networkd[1113]: lo: Gained carrier Jun 20 18:29:53.975630 systemd-networkd[1113]: Enumeration completed Jun 20 18:29:53.976602 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:29:53.976793 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:29:53.978202 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:29:53.985458 systemd[1]: Reached target network.target - Network. Jun 20 18:29:53.986161 systemd-networkd[1113]: eth0: Link UP Jun 20 18:29:53.986169 systemd-networkd[1113]: eth0: Gained carrier Jun 20 18:29:53.986186 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:29:54.025804 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.22.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:29:54.220976 ignition[1044]: Ignition 2.20.0 Jun 20 18:29:54.221008 ignition[1044]: Stage: fetch-offline Jun 20 18:29:54.221453 ignition[1044]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:29:54.221563 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:29:54.224564 ignition[1044]: Ignition finished successfully Jun 20 18:29:54.229313 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 20 18:29:54.257166 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 18:29:54.277967 ignition[1125]: Ignition 2.20.0 Jun 20 18:29:54.277998 ignition[1125]: Stage: fetch Jun 20 18:29:54.278588 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:29:54.278626 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:29:54.278827 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:29:54.291933 ignition[1125]: PUT result: OK Jun 20 18:29:54.294931 ignition[1125]: parsed url from cmdline: "" Jun 20 18:29:54.294946 ignition[1125]: no config URL provided Jun 20 18:29:54.294960 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:29:54.294985 ignition[1125]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:29:54.295019 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:29:54.296926 ignition[1125]: PUT result: OK Jun 20 18:29:54.299222 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 20 18:29:54.307750 ignition[1125]: GET result: OK Jun 20 18:29:54.309185 ignition[1125]: parsing config with SHA512: 7ba1347512367b75b88c18de75dc8998b7b1c0e27dabd7ccdfd2cacab451f1ca8288ac1addca4b3888a3f9fa8c2aa577b3450e55cb08a8f80a57830fdf3cc4eb Jun 20 18:29:54.318680 unknown[1125]: fetched base config from "system" Jun 20 18:29:54.321244 unknown[1125]: fetched base config from "system" Jun 20 18:29:54.321260 unknown[1125]: fetched user config from "aws" Jun 20 18:29:54.322517 ignition[1125]: fetch: fetch complete Jun 20 18:29:54.322530 ignition[1125]: fetch: fetch passed Jun 20 18:29:54.322625 ignition[1125]: Ignition finished successfully Jun 20 18:29:54.332342 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:29:54.344009 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:29:54.373602 ignition[1132]: Ignition 2.20.0 Jun 20 18:29:54.373631 ignition[1132]: Stage: kargs Jun 20 18:29:54.374719 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:29:54.375251 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:29:54.375417 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:29:54.379069 ignition[1132]: PUT result: OK Jun 20 18:29:54.389402 ignition[1132]: kargs: kargs passed Jun 20 18:29:54.389543 ignition[1132]: Ignition finished successfully Jun 20 18:29:54.394820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:29:54.407982 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 18:29:54.428946 ignition[1139]: Ignition 2.20.0 Jun 20 18:29:54.428975 ignition[1139]: Stage: disks Jun 20 18:29:54.429583 ignition[1139]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:29:54.429609 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:29:54.429803 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:29:54.431887 ignition[1139]: PUT result: OK Jun 20 18:29:54.443552 ignition[1139]: disks: disks passed Jun 20 18:29:54.443690 ignition[1139]: Ignition finished successfully Jun 20 18:29:54.446325 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:29:54.452936 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:29:54.458059 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jun 20 18:29:54.461013 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:29:54.463126 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:29:54.465275 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:29:54.483039 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 18:29:54.527186 systemd-fsck[1147]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 20 18:29:54.534404 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:29:54.550892 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:29:54.629990 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f172a629-efc5-4850-a631-f3c62b46134c r/w with ordered data mode. Quota mode: none. Jun 20 18:29:54.631218 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:29:54.632989 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:29:54.646886 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:29:54.660995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:29:54.663390 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 18:29:54.663469 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:29:54.663519 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:29:54.672249 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:29:54.691116 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 18:29:54.705840 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1166) Jun 20 18:29:54.710288 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:29:54.710336 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:29:54.711697 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 20 18:29:54.717761 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 20 18:29:54.720529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:29:55.125140 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:29:55.156255 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:29:55.164778 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:29:55.173020 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:29:55.514272 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:29:55.521924 systemd-networkd[1113]: eth0: Gained IPv6LL Jun 20 18:29:55.528048 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:29:55.534986 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:29:55.558157 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:29:55.562519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:29:55.590926 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 20 18:29:55.603989 ignition[1278]: INFO : Ignition 2.20.0 Jun 20 18:29:55.606669 ignition[1278]: INFO : Stage: mount Jun 20 18:29:55.606669 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:29:55.606669 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:29:55.606669 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:29:55.617852 ignition[1278]: INFO : PUT result: OK Jun 20 18:29:55.619398 ignition[1278]: INFO : mount: mount passed Jun 20 18:29:55.620982 ignition[1278]: INFO : Ignition finished successfully Jun 20 18:29:55.623931 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:29:55.636943 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:29:55.661050 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:29:55.685756 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1290) Jun 20 18:29:55.690759 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 18:29:55.690813 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:29:55.690839 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 20 18:29:55.695764 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 20 18:29:55.699586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:29:55.737743 ignition[1307]: INFO : Ignition 2.20.0 Jun 20 18:29:55.737743 ignition[1307]: INFO : Stage: files Jun 20 18:29:55.741387 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:29:55.741387 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:29:55.741387 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:29:55.748883 ignition[1307]: INFO : PUT result: OK Jun 20 18:29:55.753202 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:29:55.767110 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:29:55.767110 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:29:55.784744 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:29:55.788015 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:29:55.791343 unknown[1307]: wrote ssh authorized keys file for user: core Jun 20 18:29:55.795896 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:29:55.799152 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 18:29:55.803100 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jun 20 18:29:55.921773 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:29:56.077881 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jun 20 18:29:56.081909 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:29:56.081909 ignition[1307]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jun 20 18:29:56.549298 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:29:56.690557 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:29:56.695333 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jun 20 18:29:57.385232 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:29:57.726824 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jun 20 18:29:57.726824 ignition[1307]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:29:57.740306 ignition[1307]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:29:57.744284 ignition[1307]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:29:57.744284 ignition[1307]: 
INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:29:57.744284 ignition[1307]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:29:57.744284 ignition[1307]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:29:57.744284 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:29:57.744284 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:29:57.744284 ignition[1307]: INFO : files: files passed Jun 20 18:29:57.744284 ignition[1307]: INFO : Ignition finished successfully Jun 20 18:29:57.752990 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:29:57.779072 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:29:57.783589 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:29:57.798549 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:29:57.798807 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:29:57.818590 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:29:57.818590 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:29:57.827088 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:29:57.831102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:29:57.841180 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:29:57.858106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:29:57.904288 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:29:57.904469 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:29:57.909024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:29:57.917511 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:29:57.919824 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:29:57.935058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:29:57.962367 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:29:57.983072 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:29:58.005867 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:29:58.010991 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:29:58.013604 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:29:58.015606 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:29:58.015858 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:29:58.018715 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:29:58.021135 systemd[1]: Stopped target basic.target - Basic System. 
Jun 20 18:29:58.023198 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:29:58.025549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:29:58.028307 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:29:58.049546 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:29:58.053041 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:29:58.057324 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:29:58.064177 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:29:58.066617 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:29:58.071769 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:29:58.072176 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:29:58.078398 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:29:58.082904 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:29:58.085481 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:29:58.085819 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:29:58.095257 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:29:58.095480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:29:58.102496 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:29:58.102912 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:29:58.110347 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:29:58.110552 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:29:58.129780 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:29:58.131814 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:29:58.132437 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:29:58.162538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:29:58.167658 ignition[1360]: INFO : Ignition 2.20.0 Jun 20 18:29:58.167658 ignition[1360]: INFO : Stage: umount Jun 20 18:29:58.167658 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:29:58.167658 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:29:58.167658 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:29:58.167658 ignition[1360]: INFO : PUT result: OK Jun 20 18:29:58.189212 ignition[1360]: INFO : umount: umount passed Jun 20 18:29:58.189212 ignition[1360]: INFO : Ignition finished successfully Jun 20 18:29:58.168883 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:29:58.172964 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:29:58.181540 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:29:58.182053 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:29:58.197980 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:29:58.199771 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:29:58.216888 systemd[1]: ignition-disks.service: Deactivated successfully. 
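The PUT requests to http://169.254.169.254/latest/api/token that each Ignition stage logs above (mount, files, umount) are the EC2 IMDSv2 session-token handshake: the agent first obtains a short-lived token by sending a PUT with a TTL header, then presents that token on the metadata requests that follow. A minimal Python sketch of that exchange, using the dated metadata path that coreos-metadata requests later in this boot; the 6-hour TTL and 2-second timeout are illustrative assumptions, not values taken from the log:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: request an IMDSv2 session token. The TTL header is required;
# 21600 seconds (6 h) is an assumed value, not one shown in the log.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# Step 2: present the token on ordinary metadata GETs, e.g. the
# instance-id endpoint that coreos-metadata fetches later in this boot.
meta_req = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(meta_req, timeout=2).read().decode())
```

Run on anything other than an EC2 instance, both requests simply fail; the sketch only illustrates the token-then-fetch order that produces the "PUT ... attempt #1" and "PUT result: OK" pairs in the log.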
Jun 20 18:29:58.217354 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:29:58.231458 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:29:58.231629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:29:58.239242 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:29:58.239342 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:29:58.242656 systemd[1]: Stopped target network.target - Network. Jun 20 18:29:58.257784 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:29:58.258031 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:29:58.266916 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:29:58.268909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:29:58.270808 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:29:58.273682 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:29:58.275602 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:29:58.277708 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:29:58.277815 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:29:58.279911 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:29:58.279977 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:29:58.282440 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:29:58.282532 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:29:58.284718 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:29:58.284826 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:29:58.287223 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:29:58.289656 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:29:58.306686 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:29:58.308305 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:29:58.309287 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:29:58.319391 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:29:58.320571 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:29:58.322797 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:29:58.350404 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:29:58.351267 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:29:58.352789 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:29:58.363435 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:29:58.363516 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:29:58.382067 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:29:58.386370 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:29:58.386489 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:29:58.392459 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jun 20 18:29:58.392571 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:29:58.400969 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:29:58.401071 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:29:58.404069 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:29:58.404166 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:29:58.407062 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:29:58.424209 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:29:58.424347 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:29:58.425182 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:29:58.425357 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:29:58.453607 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:29:58.456097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:29:58.464885 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:29:58.467982 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:29:58.472462 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:29:58.472552 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:29:58.481503 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:29:58.481581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:29:58.483775 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:29:58.483862 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:29:58.488371 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:29:58.488462 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:29:58.501404 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:29:58.501500 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:29:58.515073 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:29:58.518866 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:29:58.518989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:29:58.522182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:29:58.522271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:29:58.539101 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:29:58.539222 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:29:58.547681 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:29:58.547908 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:29:58.564546 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:29:58.564963 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:29:58.572311 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jun 20 18:29:58.589432 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:29:58.605885 systemd[1]: Switching root. Jun 20 18:29:58.647835 systemd-journald[252]: Journal stopped Jun 20 18:30:00.962227 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jun 20 18:30:00.962353 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:30:00.962394 kernel: SELinux: policy capability open_perms=1 Jun 20 18:30:00.962424 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:30:00.962454 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:30:00.962483 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:30:00.962512 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:30:00.962540 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:30:00.962577 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:30:00.962607 kernel: audit: type=1403 audit(1750444199.042:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:30:00.962652 systemd[1]: Successfully loaded SELinux policy in 83.786ms. Jun 20 18:30:00.962702 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.048ms. Jun 20 18:30:00.962773 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:30:00.962809 systemd[1]: Detected virtualization amazon. Jun 20 18:30:00.962838 systemd[1]: Detected architecture arm64. Jun 20 18:30:00.962869 systemd[1]: Detected first boot. Jun 20 18:30:00.962903 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:30:00.962934 zram_generator::config[1405]: No configuration found. Jun 20 18:30:00.962965 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:30:00.962995 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:30:00.963027 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:30:00.963058 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:30:00.963091 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:30:00.963120 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:30:00.963160 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:30:00.963196 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:30:00.963227 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:30:00.963255 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:30:00.963285 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:30:00.963316 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:30:00.963346 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:30:00.963376 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:30:00.963406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:30:00.963440 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jun 20 18:30:00.963471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:30:00.963499 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:30:00.963531 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:30:00.963564 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:30:00.963594 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 18:30:00.963624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:30:00.963655 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:30:00.963689 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:30:00.963720 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:30:00.966523 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:30:00.966556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:30:00.966589 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:30:00.966618 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:30:00.966650 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:30:00.966679 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:30:00.966708 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:30:00.966782 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:30:00.967249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:30:00.967934 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:30:00.967971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:30:00.968002 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:30:00.968031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:30:00.968062 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:30:00.968090 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:30:00.968121 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:30:00.968155 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:30:00.968184 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:30:00.968213 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:30:00.968243 systemd[1]: Reached target machines.target - Containers. Jun 20 18:30:00.968273 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:30:00.968304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:30:00.968333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:30:00.968361 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:30:00.968394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 20 18:30:00.968424 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:30:00.968452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:30:00.968484 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:30:00.968513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:30:00.968541 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:30:00.968572 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:30:00.968601 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:30:00.968631 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:30:00.968664 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:30:00.968696 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:30:00.968795 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:30:00.968830 kernel: fuse: init (API version 7.39) Jun 20 18:30:00.968865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:30:00.968894 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:30:00.968923 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:30:00.968953 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:30:00.968986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:30:00.969026 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:30:00.969054 systemd[1]: Stopped verity-setup.service. Jun 20 18:30:00.969085 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:30:00.969114 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:30:00.969146 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:30:00.969457 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:30:00.969493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:30:00.969524 kernel: loop: module loaded Jun 20 18:30:00.969552 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:30:00.969581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:30:00.969616 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:30:00.969647 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:30:00.969678 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:30:00.969711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:30:00.972800 kernel: ACPI: bus type drm_connector registered Jun 20 18:30:00.972854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:30:00.972884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:30:00.972913 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:30:00.972941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jun 20 18:30:00.973036 systemd-journald[1488]: Collecting audit messages is disabled. Jun 20 18:30:00.973086 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:30:00.973119 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:30:00.973148 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:30:00.973176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:30:00.973207 systemd-journald[1488]: Journal started Jun 20 18:30:00.973257 systemd-journald[1488]: Runtime Journal (/run/log/journal/ec2da9d1ab7fc4fd8424404e944f314d) is 8M, max 75.3M, 67.3M free. Jun 20 18:30:00.369055 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:30:00.381076 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 20 18:30:00.381924 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:30:00.983796 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:30:00.985846 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:30:00.989651 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:30:00.993906 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:30:00.997415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:30:01.000445 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:30:01.029860 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:30:01.040931 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:30:01.049438 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:30:01.052934 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:30:01.052993 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:30:01.059086 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:30:01.070876 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:30:01.082067 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:30:01.084633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:30:01.091037 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:30:01.100070 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:30:01.103340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:30:01.106463 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:30:01.108863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:30:01.116047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:30:01.121958 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:30:01.127588 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jun 20 18:30:01.134877 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:30:01.137716 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:30:01.143808 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:30:01.171814 systemd-journald[1488]: Time spent on flushing to /var/log/journal/ec2da9d1ab7fc4fd8424404e944f314d is 119.415ms for 920 entries. Jun 20 18:30:01.171814 systemd-journald[1488]: System Journal (/var/log/journal/ec2da9d1ab7fc4fd8424404e944f314d) is 8M, max 195.6M, 187.6M free. Jun 20 18:30:01.316141 systemd-journald[1488]: Received client request to flush runtime journal. Jun 20 18:30:01.316243 kernel: loop0: detected capacity change from 0 to 123192 Jun 20 18:30:01.200844 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:30:01.208983 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:30:01.225159 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:30:01.270370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:30:01.287792 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:30:01.322946 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:30:01.337698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:30:01.352993 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 18:30:01.372722 udevadm[1555]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 20 18:30:01.381818 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:30:01.388931 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:30:01.401797 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:30:01.408349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:30:01.453770 kernel: loop1: detected capacity change from 0 to 113512 Jun 20 18:30:01.479527 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Jun 20 18:30:01.479566 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Jun 20 18:30:01.492428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:30:01.600787 kernel: loop2: detected capacity change from 0 to 211168 Jun 20 18:30:01.852756 kernel: loop3: detected capacity change from 0 to 53784 Jun 20 18:30:01.897781 kernel: loop4: detected capacity change from 0 to 123192 Jun 20 18:30:01.916322 kernel: loop5: detected capacity change from 0 to 113512 Jun 20 18:30:01.929788 kernel: loop6: detected capacity change from 0 to 211168 Jun 20 18:30:01.960981 kernel: loop7: detected capacity change from 0 to 53784 Jun 20 18:30:01.980232 (sd-merge)[1565]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 20 18:30:01.981330 (sd-merge)[1565]: Merged extensions into '/usr'. Jun 20 18:30:01.991487 systemd[1]: Reload requested from client PID 1540 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:30:01.991519 systemd[1]: Reloading... Jun 20 18:30:02.146766 zram_generator::config[1593]: No configuration found. 
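The (sd-merge) lines above record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami images onto /usr, after which systemd reloads its unit set; the kubernetes image is the one Ignition linked into /etc/extensions earlier in this boot. As a rough illustration, a Python sketch that enumerates candidate extension images by name from a few conventional sysext directories (only /etc/extensions and the kubernetes.raw name are actually shown in this log; the other paths and the .raw naming convention are assumptions based on the systemd-sysext documentation):

```python
from pathlib import Path

# Conventional systemd-sysext search locations. Only /etc/extensions is
# visible in this boot log; the others are assumed from the documentation.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images():
    """Map extension name -> path, e.g. 'kubernetes' for the
    /etc/extensions/kubernetes.raw symlink written by Ignition above."""
    images = {}
    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            name = entry.name.removesuffix(".raw")
            # First occurrence of a name wins in this sketch.
            images.setdefault(name, entry)
    return images

if __name__ == "__main__":
    for name, path in list_sysext_images().items():
        print(f"{name}: {path}")
```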
Jun 20 18:30:02.523355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:30:02.670761 systemd[1]: Reloading finished in 678 ms. Jun 20 18:30:02.694794 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:30:02.698253 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:30:02.719049 systemd[1]: Starting ensure-sysext.service... Jun 20 18:30:02.724149 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:30:02.732145 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:30:02.758788 systemd[1]: Reload requested from client PID 1645 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:30:02.758815 systemd[1]: Reloading... Jun 20 18:30:02.775122 ldconfig[1535]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:30:02.805989 systemd-tmpfiles[1646]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:30:02.806505 systemd-tmpfiles[1646]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:30:02.809526 systemd-tmpfiles[1646]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:30:02.813018 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Jun 20 18:30:02.813150 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Jun 20 18:30:02.826381 systemd-tmpfiles[1646]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:30:02.826408 systemd-tmpfiles[1646]: Skipping /boot Jun 20 18:30:02.857016 systemd-udevd[1647]: Using default interface naming scheme 'v255'. Jun 20 18:30:02.873578 systemd-tmpfiles[1646]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:30:02.873611 systemd-tmpfiles[1646]: Skipping /boot Jun 20 18:30:02.915770 zram_generator::config[1676]: No configuration found. Jun 20 18:30:03.138611 (udev-worker)[1683]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:30:03.285813 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1679) Jun 20 18:30:03.390012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:30:03.578476 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 18:30:03.579008 systemd[1]: Reloading finished in 819 ms. Jun 20 18:30:03.595792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:30:03.599522 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:30:03.602516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:30:03.727601 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:30:03.736033 systemd[1]: Finished ensure-sysext.service. Jun 20 18:30:03.777309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Jun 20 18:30:03.787064 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:30:03.792207 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:30:03.795343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:30:03.810908 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 18:30:03.816126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:30:03.824303 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:30:03.835118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:30:03.848812 lvm[1847]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:30:03.850705 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:30:03.860360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:30:03.869220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:30:03.871858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:30:03.879273 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:30:03.897205 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:30:03.913037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:30:03.915316 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:30:03.932528 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:30:03.939050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:30:03.945019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:30:03.945912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:30:03.949649 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:30:03.953859 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:30:03.956694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:30:03.957311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:30:03.960438 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:30:03.960860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:30:03.977785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:30:03.977918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:30:03.990988 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:30:04.012839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 18:30:04.032027 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jun 20 18:30:04.053291 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 18:30:04.057630 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:30:04.068547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:30:04.080300 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 18:30:04.092501 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:30:04.095869 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:30:04.100603 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:30:04.122756 lvm[1886]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:30:04.145285 augenrules[1892]: No rules Jun 20 18:30:04.145572 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:30:04.150651 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:30:04.151472 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:30:04.181106 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 18:30:04.191755 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:30:04.213167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:30:04.320197 systemd-networkd[1860]: lo: Link UP Jun 20 18:30:04.320213 systemd-networkd[1860]: lo: Gained carrier Jun 20 18:30:04.323956 systemd-networkd[1860]: Enumeration completed Jun 20 18:30:04.324353 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:30:04.327358 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:30:04.327529 systemd-networkd[1860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:30:04.329499 systemd-networkd[1860]: eth0: Link UP Jun 20 18:30:04.330021 systemd-networkd[1860]: eth0: Gained carrier Jun 20 18:30:04.330157 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:30:04.336069 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:30:04.341511 systemd-resolved[1862]: Positive Trust Anchors: Jun 20 18:30:04.341554 systemd-resolved[1862]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:30:04.341616 systemd-resolved[1862]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:30:04.345864 systemd-networkd[1860]: eth0: DHCPv4 address 172.31.22.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 20 18:30:04.348909 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:30:04.359838 systemd-resolved[1862]: Defaulting to hostname 'linux'. Jun 20 18:30:04.367404 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:30:04.370327 systemd[1]: Reached target network.target - Network. Jun 20 18:30:04.380517 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:30:04.382988 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:30:04.385522 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:30:04.388167 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:30:04.390967 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:30:04.393481 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:30:04.398088 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:30:04.400820 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:30:04.400977 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:30:04.402863 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:30:04.405408 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:30:04.411661 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:30:04.418620 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:30:04.423155 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:30:04.425804 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:30:04.431619 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:30:04.434539 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:30:04.438683 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:30:04.441706 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:30:04.444985 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:30:04.447175 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:30:04.449394 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jun 20 18:30:04.449450 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:30:04.458655 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:30:04.464648 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:30:04.475128 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:30:04.479322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:30:04.491049 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:30:04.493220 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:30:04.496432 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:30:04.504001 systemd[1]: Started ntpd.service - Network Time Service. Jun 20 18:30:04.524990 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:30:04.528989 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 20 18:30:04.537097 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:30:04.549519 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:30:04.563890 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:30:04.576528 jq[1919]: false Jun 20 18:30:04.567620 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:30:04.568515 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:30:04.572076 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:30:04.577947 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:30:04.607222 extend-filesystems[1920]: Found loop4 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found loop5 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found loop6 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found loop7 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1p1 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1p2 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1p3 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found usr Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1p4 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1p6 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1p7 Jun 20 18:30:04.607222 extend-filesystems[1920]: Found nvme0n1p9 Jun 20 18:30:04.607222 extend-filesystems[1920]: Checking size of /dev/nvme0n1p9 Jun 20 18:30:04.587405 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:30:04.706243 extend-filesystems[1920]: Resized partition /dev/nvme0n1p9 Jun 20 18:30:04.647915 dbus-daemon[1918]: [system] SELinux support is enabled Jun 20 18:30:04.587954 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jun 20 18:30:04.672646 dbus-daemon[1918]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1860 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 20 18:30:04.649952 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:30:04.663492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:30:04.663534 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:30:04.668980 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:30:04.669020 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:30:04.701027 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 20 18:30:04.718670 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:30:04.738745 extend-filesystems[1953]: resize2fs 1.47.1 (20-May-2024) Jun 20 18:30:04.719180 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:30:04.726429 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:30:04.727866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:30:04.767767 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 20 18:30:04.777992 jq[1932]: true Jun 20 18:30:04.815517 tar[1940]: linux-arm64/LICENSE Jun 20 18:30:04.815517 tar[1940]: linux-arm64/helm Jun 20 18:30:04.822813 (ntainerd)[1963]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:30:04.825911 ntpd[1922]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:12 UTC 2025 (1): Starting Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:12 UTC 2025 (1): Starting Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: ---------------------------------------------------- Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: corporation. Support and training for ntp-4 are Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: available at https://www.nwtime.org/support Jun 20 18:30:04.828342 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: ---------------------------------------------------- Jun 20 18:30:04.825961 ntpd[1922]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 20 18:30:04.826000 ntpd[1922]: ---------------------------------------------------- Jun 20 18:30:04.826021 ntpd[1922]: ntp-4 is maintained by Network Time Foundation, Jun 20 18:30:04.826039 ntpd[1922]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 20 18:30:04.826057 ntpd[1922]: corporation. 
Support and training for ntp-4 are Jun 20 18:30:04.826073 ntpd[1922]: available at https://www.nwtime.org/support Jun 20 18:30:04.826090 ntpd[1922]: ---------------------------------------------------- Jun 20 18:30:04.840499 ntpd[1922]: proto: precision = 0.108 usec (-23) Jun 20 18:30:04.842869 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: proto: precision = 0.108 usec (-23) Jun 20 18:30:04.846422 ntpd[1922]: basedate set to 2025-06-08 Jun 20 18:30:04.846470 ntpd[1922]: gps base set to 2025-06-08 (week 2370) Jun 20 18:30:04.846690 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: basedate set to 2025-06-08 Jun 20 18:30:04.846690 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: gps base set to 2025-06-08 (week 2370) Jun 20 18:30:04.854782 update_engine[1931]: I20250620 18:30:04.854316 1931 main.cc:92] Flatcar Update Engine starting Jun 20 18:30:04.864099 jq[1965]: true Jun 20 18:30:04.858788 ntpd[1922]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Listen and drop on 0 v6wildcard [::]:123 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Listen normally on 3 eth0 172.31.22.87:123 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Listen normally on 4 lo [::1]:123 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: bind(21) AF_INET6 fe80::479:89ff:fea2:9a43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: unable to create socket on eth0 (5) for fe80::479:89ff:fea2:9a43%2#123 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: failed to init interface for address fe80::479:89ff:fea2:9a43%2 Jun 20 18:30:04.864514 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: Listening on routing socket on fd #21 for interface updates Jun 20 18:30:04.858870 ntpd[1922]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 20 18:30:04.859130 ntpd[1922]: Listen normally on 2 lo 127.0.0.1:123 Jun 20 18:30:04.859190 ntpd[1922]: Listen normally on 3 eth0 172.31.22.87:123 Jun 20 18:30:04.859288 ntpd[1922]: Listen normally on 4 lo [::1]:123 Jun 20 18:30:04.859365 ntpd[1922]: bind(21) AF_INET6 fe80::479:89ff:fea2:9a43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:30:04.859404 ntpd[1922]: unable to create socket on eth0 (5) for fe80::479:89ff:fea2:9a43%2#123 Jun 20 18:30:04.859431 ntpd[1922]: failed to init interface for address fe80::479:89ff:fea2:9a43%2 Jun 20 18:30:04.859481 ntpd[1922]: Listening on routing socket on fd #21 for interface updates Jun 20 18:30:04.901760 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 20 18:30:04.901883 update_engine[1931]: I20250620 18:30:04.899679 1931 update_check_scheduler.cc:74] Next update check in 2m16s Jun 20 18:30:04.885351 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:30:04.894135 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:30:04.914936 extend-filesystems[1953]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 20 18:30:04.914936 extend-filesystems[1953]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 20 18:30:04.914936 extend-filesystems[1953]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 20 18:30:04.911349 systemd[1]: extend-filesystems.service: Deactivated successfully. 
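The extend-filesystems and kernel messages above describe an online ext4 grow of the root partition: resize2fs takes /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 5.7 GiB. A small sketch of that conversion, using only the block counts and block size printed in the log:

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output above

def blocks_to_gib(blocks: int) -> float:
    """Convert an ext4 block count at 4 KiB/block to GiB."""
    return blocks * BLOCK_SIZE / 1024**3

old_blocks, new_blocks = 553472, 1489915  # figures from the kernel/resize2fs log
print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")  # ~5.68 GiB
```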
Jun 20 18:30:04.906384 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:30:04.927082 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:30:04.927082 ntpd[1922]: 20 Jun 18:30:04 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:30:04.927198 extend-filesystems[1920]: Resized filesystem in /dev/nvme0n1p9 Jun 20 18:30:04.911790 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 18:30:04.906453 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 20 18:30:05.008969 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 20 18:30:05.013461 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:30:05.050763 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1683) Jun 20 18:30:05.068187 coreos-metadata[1917]: Jun 20 18:30:05.068 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:30:05.074962 coreos-metadata[1917]: Jun 20 18:30:05.069 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 20 18:30:05.074962 coreos-metadata[1917]: Jun 20 18:30:05.070 INFO Fetch successful Jun 20 18:30:05.074962 coreos-metadata[1917]: Jun 20 18:30:05.070 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 20 18:30:05.074962 coreos-metadata[1917]: Jun 20 18:30:05.071 INFO Fetch successful Jun 20 18:30:05.074962 coreos-metadata[1917]: Jun 20 18:30:05.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.076 INFO Fetch successful Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.076 INFO Fetch successful Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.078 INFO Fetch failed with 404: resource not found Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.078 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.079 INFO Fetch successful Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.079 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.082 INFO Fetch successful Jun 20 18:30:05.083827 coreos-metadata[1917]: Jun 20 18:30:05.082 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 20 18:30:05.090008 coreos-metadata[1917]: Jun 20 18:30:05.085 INFO Fetch successful Jun 20 18:30:05.090008 coreos-metadata[1917]: Jun 20 18:30:05.085 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 20 18:30:05.090982 coreos-metadata[1917]: Jun 20 18:30:05.090 INFO Fetch successful Jun 20 18:30:05.090982 coreos-metadata[1917]: Jun 20 18:30:05.090 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 20 18:30:05.103561 coreos-metadata[1917]: Jun 20 18:30:05.102 INFO Fetch successful Jun 20 18:30:05.157434 bash[2004]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:30:05.169435 
systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:30:05.192262 systemd[1]: Starting sshkeys.service... Jun 20 18:30:05.247827 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:30:05.250505 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:30:05.270802 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 18:30:05.280922 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 18:30:05.474226 systemd-logind[1929]: Watching system buttons on /dev/input/event0 (Power Button) Jun 20 18:30:05.474325 systemd-logind[1929]: Watching system buttons on /dev/input/event1 (Sleep Button) Jun 20 18:30:05.478702 systemd-logind[1929]: New seat seat0. Jun 20 18:30:05.489093 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:30:05.598364 coreos-metadata[2035]: Jun 20 18:30:05.598 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 20 18:30:05.602256 locksmithd[1978]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:30:05.606393 coreos-metadata[2035]: Jun 20 18:30:05.605 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 20 18:30:05.620761 coreos-metadata[2035]: Jun 20 18:30:05.620 INFO Fetch successful Jun 20 18:30:05.620761 coreos-metadata[2035]: Jun 20 18:30:05.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 20 18:30:05.631898 coreos-metadata[2035]: Jun 20 18:30:05.631 INFO Fetch successful Jun 20 18:30:05.650857 unknown[2035]: wrote ssh authorized keys file for user: core Jun 20 18:30:05.685646 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 18:30:05.708117 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 18:30:05.725569 dbus-daemon[1918]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1951 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 18:30:05.745451 systemd[1]: Starting polkit.service - Authorization Manager... Jun 20 18:30:05.779454 update-ssh-keys[2101]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:30:05.784225 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 18:30:05.793771 systemd[1]: Finished sshkeys.service. 
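The coreos-metadata fetches above follow the usual EC2 instance metadata flow: a PUT to the token endpoint, then GETs of individual meta-data paths with the token attached. A rough Python sketch of the same two steps, usable only from inside an EC2 instance; the endpoint and the /2021-01-03 paths are taken from the log, and the token header names are the standard IMDSv2 ones:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: obtain a session token.
    token_req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    # Step 2: fetch one of the paths seen in the log.
    meta_req = urllib.request.Request(
        IMDS + "/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(meta_req, timeout=2).read().decode())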
Jun 20 18:30:05.826787 ntpd[1922]: bind(24) AF_INET6 fe80::479:89ff:fea2:9a43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:30:05.826849 ntpd[1922]: unable to create socket on eth0 (6) for fe80::479:89ff:fea2:9a43%2#123 Jun 20 18:30:05.827261 ntpd[1922]: 20 Jun 18:30:05 ntpd[1922]: bind(24) AF_INET6 fe80::479:89ff:fea2:9a43%2#123 flags 0x11 failed: Cannot assign requested address Jun 20 18:30:05.827261 ntpd[1922]: 20 Jun 18:30:05 ntpd[1922]: unable to create socket on eth0 (6) for fe80::479:89ff:fea2:9a43%2#123 Jun 20 18:30:05.827261 ntpd[1922]: 20 Jun 18:30:05 ntpd[1922]: failed to init interface for address fe80::479:89ff:fea2:9a43%2 Jun 20 18:30:05.826877 ntpd[1922]: failed to init interface for address fe80::479:89ff:fea2:9a43%2 Jun 20 18:30:05.845783 containerd[1963]: time="2025-06-20T18:30:05.845069101Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 18:30:05.845421 polkitd[2108]: Started polkitd version 121 Jun 20 18:30:05.898685 polkitd[2108]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 18:30:05.898835 polkitd[2108]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 18:30:05.903067 polkitd[2108]: Finished loading, compiling and executing 2 rules Jun 20 18:30:05.905325 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 18:30:05.905596 systemd[1]: Started polkit.service - Authorization Manager. Jun 20 18:30:05.908232 polkitd[2108]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 18:30:05.958474 systemd-hostnamed[1951]: Hostname set to (transient) Jun 20 18:30:05.958475 systemd-resolved[1862]: System hostname changed to 'ip-172-31-22-87'. Jun 20 18:30:05.969585 containerd[1963]: time="2025-06-20T18:30:05.969524185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:30:05.972331 containerd[1963]: time="2025-06-20T18:30:05.972265417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:30:05.972489 containerd[1963]: time="2025-06-20T18:30:05.972461245Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.972763777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973064533Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973097557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973216237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973243561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973578061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973610365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973639645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973663129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.973871905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:30:05.974760 containerd[1963]: time="2025-06-20T18:30:05.974276269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:30:05.975270 containerd[1963]: time="2025-06-20T18:30:05.974503033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:30:05.975270 containerd[1963]: time="2025-06-20T18:30:05.974530417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 18:30:05.975270 containerd[1963]: time="2025-06-20T18:30:05.974705221Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 18:30:05.975809 containerd[1963]: time="2025-06-20T18:30:05.975679417Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:30:05.985793 containerd[1963]: time="2025-06-20T18:30:05.985743553Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 18:30:05.986011 containerd[1963]: time="2025-06-20T18:30:05.985981801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 18:30:05.986241 containerd[1963]: time="2025-06-20T18:30:05.986212453Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 18:30:05.986360 containerd[1963]: time="2025-06-20T18:30:05.986334025Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 18:30:05.986587 containerd[1963]: time="2025-06-20T18:30:05.986558329Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 18:30:05.987050 containerd[1963]: time="2025-06-20T18:30:05.987011665Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 18:30:05.987678 containerd[1963]: time="2025-06-20T18:30:05.987641473Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jun 20 18:30:05.988222 containerd[1963]: time="2025-06-20T18:30:05.988188397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 18:30:05.988337 containerd[1963]: time="2025-06-20T18:30:05.988311325Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988446649Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988486513Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988517125Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988545913Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988576297Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988609285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988640881Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988670077Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988697305Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988762897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988798021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988826425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988856089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.989396 containerd[1963]: time="2025-06-20T18:30:05.988886413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.988916629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.988943797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.988972933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989002489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989034541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989060965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989106985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989137237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989168677Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989209033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989240905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.990035 containerd[1963]: time="2025-06-20T18:30:05.989267749Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:30:05.991060 containerd[1963]: time="2025-06-20T18:30:05.990720685Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 18:30:05.991060 containerd[1963]: time="2025-06-20T18:30:05.990884089Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:30:05.991060 containerd[1963]: time="2025-06-20T18:30:05.990923173Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:30:05.991060 containerd[1963]: time="2025-06-20T18:30:05.990957385Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:30:05.991060 containerd[1963]: time="2025-06-20T18:30:05.990980977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:30:05.991060 containerd[1963]: time="2025-06-20T18:30:05.991009045Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 18:30:05.991753 containerd[1963]: time="2025-06-20T18:30:05.991031533Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:30:05.991753 containerd[1963]: time="2025-06-20T18:30:05.991487785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 20 18:30:05.993178 containerd[1963]: time="2025-06-20T18:30:05.992157157Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:30:05.993178 containerd[1963]: time="2025-06-20T18:30:05.992251213Z" level=info msg="Connect containerd service" Jun 20 18:30:05.993178 containerd[1963]: time="2025-06-20T18:30:05.992315977Z" level=info msg="using legacy CRI server" Jun 20 18:30:05.993178 containerd[1963]: time="2025-06-20T18:30:05.992334541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:30:05.993178 containerd[1963]: time="2025-06-20T18:30:05.992547385Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:30:05.994473 containerd[1963]: time="2025-06-20T18:30:05.994427293Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:30:05.995343 
containerd[1963]: time="2025-06-20T18:30:05.995289553Z" level=info msg="Start subscribing containerd event" Jun 20 18:30:05.995490 containerd[1963]: time="2025-06-20T18:30:05.995463829Z" level=info msg="Start recovering state" Jun 20 18:30:05.995799 containerd[1963]: time="2025-06-20T18:30:05.995773849Z" level=info msg="Start event monitor" Jun 20 18:30:05.995899 containerd[1963]: time="2025-06-20T18:30:05.995874841Z" level=info msg="Start snapshots syncer" Jun 20 18:30:05.996123 containerd[1963]: time="2025-06-20T18:30:05.996096049Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:30:05.996419 containerd[1963]: time="2025-06-20T18:30:05.996196885Z" level=info msg="Start streaming server" Jun 20 18:30:05.996606 containerd[1963]: time="2025-06-20T18:30:05.996576793Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:30:05.996805 containerd[1963]: time="2025-06-20T18:30:05.996779557Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:30:05.997251 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:30:05.999137 containerd[1963]: time="2025-06-20T18:30:05.998483845Z" level=info msg="containerd successfully booted in 0.156099s" Jun 20 18:30:06.400949 systemd-networkd[1860]: eth0: Gained IPv6LL Jun 20 18:30:06.410608 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:30:06.417610 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:30:06.429222 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 20 18:30:06.434630 tar[1940]: linux-arm64/README.md Jun 20 18:30:06.443784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:06.452205 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:30:06.505187 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:30:06.527365 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:30:06.567289 amazon-ssm-agent[2124]: Initializing new seelog logger Jun 20 18:30:06.567802 amazon-ssm-agent[2124]: New Seelog Logger Creation Complete Jun 20 18:30:06.568015 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:30:06.568093 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:30:06.568808 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 processing appconfig overrides Jun 20 18:30:06.569703 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:30:06.569820 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:30:06.570026 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 processing appconfig overrides Jun 20 18:30:06.570381 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:30:06.571468 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:30:06.571468 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 processing appconfig overrides Jun 20 18:30:06.571468 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO Proxy environment variables: Jun 20 18:30:06.575763 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jun 20 18:30:06.575763 amazon-ssm-agent[2124]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 20 18:30:06.575763 amazon-ssm-agent[2124]: 2025/06/20 18:30:06 processing appconfig overrides Jun 20 18:30:06.670629 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO https_proxy: Jun 20 18:30:06.768462 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO http_proxy: Jun 20 18:30:06.867105 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO no_proxy: Jun 20 18:30:06.965590 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO Checking if agent identity type OnPrem can be assumed Jun 20 18:30:07.046007 sshd_keygen[1975]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:30:07.063939 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO Checking if agent identity type EC2 can be assumed Jun 20 18:30:07.122814 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:30:07.136143 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:30:07.148168 systemd[1]: Started sshd@0-172.31.22.87:22-147.75.109.163:36086.service - OpenSSH per-connection server daemon (147.75.109.163:36086). Jun 20 18:30:07.163955 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO Agent will take identity from EC2 Jun 20 18:30:07.168420 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:30:07.168895 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:30:07.180337 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:30:07.202236 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:30:07.218222 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:30:07.235142 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:30:07.238167 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:30:07.263519 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 20 18:30:07.362828 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [amazon-ssm-agent] Starting Core Agent Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [Registrar] Starting registrar module Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:07 INFO [EC2Identity] EC2 registration was successful. 
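The amazon-ssm-agent lines repeat one pattern: built-in defaults are loaded, /etc/amazon/ssm/amazon-ssm-agent.json is found, and its contents are applied as overrides before "processing appconfig overrides". A generic sketch of that override step; the default keys and the shallow per-section merge are illustrative assumptions, not the agent's actual schema or merge rules:

    import json, pathlib

    DEFAULTS = {"Agent": {"Region": ""}, "Mds": {"Endpoint": ""}}   # assumed sections

    def apply_overrides(defaults: dict, override_path: str) -> dict:
        merged = json.loads(json.dumps(defaults))      # cheap deep copy
        path = pathlib.Path(override_path)
        if path.exists():                              # "Found config file at ..."
            overrides = json.loads(path.read_text())
            for section, values in overrides.items():  # "Applying config override ..."
                merged.setdefault(section, {}).update(values)
        return merged

    appconfig = apply_overrides(DEFAULTS, "/etc/amazon/ssm/amazon-ssm-agent.json")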
Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:07 INFO [CredentialRefresher] credentialRefresher has started Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:07 INFO [CredentialRefresher] Starting credentials refresher loop Jun 20 18:30:07.450701 amazon-ssm-agent[2124]: 2025-06-20 18:30:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 20 18:30:07.462270 amazon-ssm-agent[2124]: 2025-06-20 18:30:07 INFO [CredentialRefresher] Next credential rotation will be in 31.5499849927 minutes Jun 20 18:30:07.500492 sshd[2155]: Accepted publickey for core from 147.75.109.163 port 36086 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:30:07.504302 sshd-session[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:07.516147 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:30:07.527237 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:30:07.554251 systemd-logind[1929]: New session 1 of user core. Jun 20 18:30:07.565822 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:30:07.579247 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:30:07.604884 (systemd)[2166]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:30:07.610806 systemd-logind[1929]: New session c1 of user core. Jun 20 18:30:07.908159 systemd[2166]: Queued start job for default target default.target. Jun 20 18:30:07.919830 systemd[2166]: Created slice app.slice - User Application Slice. Jun 20 18:30:07.919892 systemd[2166]: Reached target paths.target - Paths. Jun 20 18:30:07.919977 systemd[2166]: Reached target timers.target - Timers. Jun 20 18:30:07.923184 systemd[2166]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:30:07.957116 systemd[2166]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:30:07.957363 systemd[2166]: Reached target sockets.target - Sockets. Jun 20 18:30:07.957841 systemd[2166]: Reached target basic.target - Basic System. Jun 20 18:30:07.958002 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:30:07.958687 systemd[2166]: Reached target default.target - Main User Target. Jun 20 18:30:07.959395 systemd[2166]: Startup finished in 332ms. Jun 20 18:30:07.966476 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:30:08.126296 systemd[1]: Started sshd@1-172.31.22.87:22-147.75.109.163:49218.service - OpenSSH per-connection server daemon (147.75.109.163:49218). Jun 20 18:30:08.203969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:08.208837 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:30:08.211382 systemd[1]: Startup finished in 1.093s (kernel) + 9.225s (initrd) + 9.250s (userspace) = 19.569s. Jun 20 18:30:08.220394 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:30:08.317781 sshd[2177]: Accepted publickey for core from 147.75.109.163 port 49218 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:30:08.320363 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:08.330801 systemd-logind[1929]: New session 2 of user core. 
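The "Accepted publickey for core ... RSA SHA256:jqOX..." entries show OpenSSH's key fingerprint format: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A small helper that reproduces the format from an authorized_keys line (the function name and its input are illustrative; the key material itself is not in the log):

    import base64, hashlib

    def ssh_fingerprint(authorized_keys_line: str) -> str:
        # A line looks like "ssh-rsa <base64 blob> comment"; the blob is field 2.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")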
Jun 20 18:30:08.338067 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:30:08.462972 sshd[2189]: Connection closed by 147.75.109.163 port 49218 Jun 20 18:30:08.465013 sshd-session[2177]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:08.477134 systemd[1]: sshd@1-172.31.22.87:22-147.75.109.163:49218.service: Deactivated successfully. Jun 20 18:30:08.482143 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 18:30:08.487482 systemd-logind[1929]: Session 2 logged out. Waiting for processes to exit. Jun 20 18:30:08.490450 amazon-ssm-agent[2124]: 2025-06-20 18:30:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 20 18:30:08.514347 systemd[1]: Started sshd@2-172.31.22.87:22-147.75.109.163:49220.service - OpenSSH per-connection server daemon (147.75.109.163:49220). Jun 20 18:30:08.517214 systemd-logind[1929]: Removed session 2. Jun 20 18:30:08.593033 amazon-ssm-agent[2124]: 2025-06-20 18:30:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2198) started Jun 20 18:30:08.699241 amazon-ssm-agent[2124]: 2025-06-20 18:30:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 20 18:30:08.718884 sshd[2200]: Accepted publickey for core from 147.75.109.163 port 49220 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:30:08.723695 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:08.736446 systemd-logind[1929]: New session 3 of user core. Jun 20 18:30:08.744082 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:30:08.826706 ntpd[1922]: Listen normally on 7 eth0 [fe80::479:89ff:fea2:9a43%2]:123 Jun 20 18:30:08.827148 ntpd[1922]: 20 Jun 18:30:08 ntpd[1922]: Listen normally on 7 eth0 [fe80::479:89ff:fea2:9a43%2]:123 Jun 20 18:30:08.863762 sshd[2213]: Connection closed by 147.75.109.163 port 49220 Jun 20 18:30:08.864185 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:08.873617 systemd[1]: sshd@2-172.31.22.87:22-147.75.109.163:49220.service: Deactivated successfully. Jun 20 18:30:08.877466 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 18:30:08.879408 systemd-logind[1929]: Session 3 logged out. Waiting for processes to exit. Jun 20 18:30:08.882477 systemd-logind[1929]: Removed session 3. Jun 20 18:30:08.901371 systemd[1]: Started sshd@3-172.31.22.87:22-147.75.109.163:49230.service - OpenSSH per-connection server daemon (147.75.109.163:49230). Jun 20 18:30:09.142808 sshd[2219]: Accepted publickey for core from 147.75.109.163 port 49230 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:30:09.145931 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:09.158335 systemd-logind[1929]: New session 4 of user core. Jun 20 18:30:09.165041 systemd[1]: Started session-4.scope - Session 4 of User core. 
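ntpd only gets a socket on the link-local address once eth0 has gained IPv6LL ("Listen normally on 7 eth0 [fe80::479:89ff:fea2:9a43%2]:123"); the earlier bind() attempts failed with "Cannot assign requested address". The "%2" scope id is what ties a link-local address to an interface. A Python sketch of such a bind; the address is copied from the log, while the interface name and port are stand-ins, and the bind succeeds only on a host that actually owns the address:

    import socket

    addr = "fe80::479:89ff:fea2:9a43%eth0"   # link-local addresses need a scope id
    port = 12300                             # port 123 would require privileges

    family, sock_type, proto, _, sockaddr = socket.getaddrinfo(
        addr, port, socket.AF_INET6, socket.SOCK_DGRAM)[0]
    sock = socket.socket(family, sock_type, proto)
    sock.bind(sockaddr)    # raises OSError while the interface lacks the address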
Jun 20 18:30:09.279020 kubelet[2184]: E0620 18:30:09.278933 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:30:09.282554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:30:09.282964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:30:09.283673 systemd[1]: kubelet.service: Consumed 1.437s CPU time, 261.3M memory peak. Jun 20 18:30:09.301803 sshd[2222]: Connection closed by 147.75.109.163 port 49230 Jun 20 18:30:09.302596 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:09.309266 systemd[1]: sshd@3-172.31.22.87:22-147.75.109.163:49230.service: Deactivated successfully. Jun 20 18:30:09.313344 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:30:09.314916 systemd-logind[1929]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:30:09.317595 systemd-logind[1929]: Removed session 4. Jun 20 18:30:09.342268 systemd[1]: Started sshd@4-172.31.22.87:22-147.75.109.163:49240.service - OpenSSH per-connection server daemon (147.75.109.163:49240). Jun 20 18:30:09.522530 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 49240 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:30:09.525489 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:09.535099 systemd-logind[1929]: New session 5 of user core. Jun 20 18:30:09.542974 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:30:09.699113 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:30:09.699783 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:30:09.718432 sudo[2233]: pam_unix(sudo:session): session closed for user root Jun 20 18:30:09.741357 sshd[2232]: Connection closed by 147.75.109.163 port 49240 Jun 20 18:30:09.742416 sshd-session[2230]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:09.749669 systemd[1]: sshd@4-172.31.22.87:22-147.75.109.163:49240.service: Deactivated successfully. Jun 20 18:30:09.753421 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:30:09.755412 systemd-logind[1929]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:30:09.757617 systemd-logind[1929]: Removed session 5. Jun 20 18:30:09.784237 systemd[1]: Started sshd@5-172.31.22.87:22-147.75.109.163:49248.service - OpenSSH per-connection server daemon (147.75.109.163:49248). Jun 20 18:30:09.978992 sshd[2239]: Accepted publickey for core from 147.75.109.163 port 49248 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:30:09.981434 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:09.989243 systemd-logind[1929]: New session 6 of user core. Jun 20 18:30:10.001019 systemd[1]: Started session-6.scope - Session 6 of User core. 
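Every kubelet start in this log fails the same way: /var/lib/kubelet/config.yaml does not exist yet, which is expected on a node that has not been joined to a cluster (kubeadm normally writes that file). As a hedged illustration only, here is a sketch that writes a minimal KubeletConfiguration; the field set is an assumption for demonstration, not what kubeadm would generate, and it is emitted as JSON, which the kubelet's YAML parser accepts:

    import json, pathlib

    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "cgroupDriver": "systemd",   # matches the CgroupDriver seen later in this log
    }
    # Stand-in path; the real kubelet reads /var/lib/kubelet/config.yaml.
    pathlib.Path("/tmp/kubelet-config.yaml").write_text(json.dumps(config, indent=2))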
Jun 20 18:30:10.103535 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:30:10.104525 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:30:10.110448 sudo[2243]: pam_unix(sudo:session): session closed for user root Jun 20 18:30:10.120508 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:30:10.121669 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:30:10.151396 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:30:10.202590 augenrules[2265]: No rules Jun 20 18:30:10.204867 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:30:10.205382 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:30:10.208076 sudo[2242]: pam_unix(sudo:session): session closed for user root Jun 20 18:30:10.231055 sshd[2241]: Connection closed by 147.75.109.163 port 49248 Jun 20 18:30:10.232062 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:10.237270 systemd-logind[1929]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:30:10.239483 systemd[1]: sshd@5-172.31.22.87:22-147.75.109.163:49248.service: Deactivated successfully. Jun 20 18:30:10.242536 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:30:10.244235 systemd-logind[1929]: Removed session 6. Jun 20 18:30:10.277185 systemd[1]: Started sshd@6-172.31.22.87:22-147.75.109.163:49252.service - OpenSSH per-connection server daemon (147.75.109.163:49252). Jun 20 18:30:10.460948 sshd[2274]: Accepted publickey for core from 147.75.109.163 port 49252 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:30:10.463958 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:10.472568 systemd-logind[1929]: New session 7 of user core. Jun 20 18:30:10.478990 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:30:10.584269 sudo[2277]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:30:10.584928 sudo[2277]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:30:11.362176 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:30:11.362350 (dockerd)[2293]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:30:12.036738 systemd-resolved[1862]: Clock change detected. Flushing caches. Jun 20 18:30:12.044963 dockerd[2293]: time="2025-06-20T18:30:12.044833460Z" level=info msg="Starting up" Jun 20 18:30:12.371152 dockerd[2293]: time="2025-06-20T18:30:12.370701657Z" level=info msg="Loading containers: start." Jun 20 18:30:12.630681 kernel: Initializing XFRM netlink socket Jun 20 18:30:12.687709 (udev-worker)[2318]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:30:12.775826 systemd-networkd[1860]: docker0: Link UP Jun 20 18:30:12.819051 dockerd[2293]: time="2025-06-20T18:30:12.818983907Z" level=info msg="Loading containers: done." Jun 20 18:30:12.841360 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3259217625-merged.mount: Deactivated successfully. 
Jun 20 18:30:12.844188 dockerd[2293]: time="2025-06-20T18:30:12.844111920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:30:12.844329 dockerd[2293]: time="2025-06-20T18:30:12.844252248Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 18:30:12.844505 dockerd[2293]: time="2025-06-20T18:30:12.844472508Z" level=info msg="Daemon has completed initialization" Jun 20 18:30:12.893417 dockerd[2293]: time="2025-06-20T18:30:12.893207208Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:30:12.893795 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:30:13.779501 containerd[1963]: time="2025-06-20T18:30:13.779431740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 18:30:14.336560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765771244.mount: Deactivated successfully. Jun 20 18:30:15.699690 containerd[1963]: time="2025-06-20T18:30:15.699479330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:15.701841 containerd[1963]: time="2025-06-20T18:30:15.701760878Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716" Jun 20 18:30:15.704564 containerd[1963]: time="2025-06-20T18:30:15.704494622Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:15.710549 containerd[1963]: time="2025-06-20T18:30:15.710453270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:15.715120 containerd[1963]: time="2025-06-20T18:30:15.713521934Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.934021146s" Jun 20 18:30:15.715120 containerd[1963]: time="2025-06-20T18:30:15.713589686Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jun 20 18:30:15.718129 containerd[1963]: time="2025-06-20T18:30:15.718076222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 18:30:17.142312 containerd[1963]: time="2025-06-20T18:30:17.142236433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:17.143883 containerd[1963]: time="2025-06-20T18:30:17.143758381Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623" Jun 20 18:30:17.144870 containerd[1963]: time="2025-06-20T18:30:17.144811069Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:17.151453 containerd[1963]: time="2025-06-20T18:30:17.151397365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:17.157132 containerd[1963]: time="2025-06-20T18:30:17.156613105Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.438292575s" Jun 20 18:30:17.157132 containerd[1963]: time="2025-06-20T18:30:17.156715741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jun 20 18:30:17.160581 containerd[1963]: time="2025-06-20T18:30:17.160522045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 18:30:18.445794 containerd[1963]: time="2025-06-20T18:30:18.445711095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:18.446968 containerd[1963]: time="2025-06-20T18:30:18.446894511Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515" Jun 20 18:30:18.448602 containerd[1963]: time="2025-06-20T18:30:18.448505247Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:18.453944 containerd[1963]: time="2025-06-20T18:30:18.453894435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:18.456395 containerd[1963]: time="2025-06-20T18:30:18.456223539Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.29564189s" Jun 20 18:30:18.456395 containerd[1963]: time="2025-06-20T18:30:18.456272355Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jun 20 18:30:18.457242 containerd[1963]: time="2025-06-20T18:30:18.456959031Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 18:30:19.742394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:30:19.751047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:19.782517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183671252.mount: Deactivated successfully. Jun 20 18:30:20.127998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
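The containerd pull records above include both the bytes read and the elapsed time, so effective pull throughput can be read straight off the log. A quick check for the three images pulled so far (numbers copied from the entries above):

    pulls = {
        "kube-apiserver:v1.33.2":          (27351716, 1.934021146),
        "kube-controller-manager:v1.33.2": (23537623, 1.438292575),
        "kube-scheduler:v1.33.2":          (18293515, 1.29564189),
    }
    for image, (bytes_read, seconds) in pulls.items():
        print(f"{image}: {bytes_read / seconds / 2**20:.1f} MiB/s")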
Jun 20 18:30:20.138645 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:30:20.224563 kubelet[2563]: E0620 18:30:20.224350 2563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:30:20.234140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:30:20.234499 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:30:20.236073 systemd[1]: kubelet.service: Consumed 311ms CPU time, 106.7M memory peak. Jun 20 18:30:20.582588 containerd[1963]: time="2025-06-20T18:30:20.582416886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:20.584890 containerd[1963]: time="2025-06-20T18:30:20.584805918Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472" Jun 20 18:30:20.587323 containerd[1963]: time="2025-06-20T18:30:20.587235450Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:20.592076 containerd[1963]: time="2025-06-20T18:30:20.591981618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:20.593502 containerd[1963]: time="2025-06-20T18:30:20.593268258Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 2.136258071s" Jun 20 18:30:20.593502 containerd[1963]: time="2025-06-20T18:30:20.593321418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jun 20 18:30:20.594686 containerd[1963]: time="2025-06-20T18:30:20.594000270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 18:30:21.136321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490837433.mount: Deactivated successfully. 
Jun 20 18:30:22.378596 containerd[1963]: time="2025-06-20T18:30:22.375712807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:22.378596 containerd[1963]: time="2025-06-20T18:30:22.378493231Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jun 20 18:30:22.380859 containerd[1963]: time="2025-06-20T18:30:22.380785723Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:22.392728 containerd[1963]: time="2025-06-20T18:30:22.392657923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:22.397392 containerd[1963]: time="2025-06-20T18:30:22.397110031Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.803048441s" Jun 20 18:30:22.397392 containerd[1963]: time="2025-06-20T18:30:22.397173595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jun 20 18:30:22.398912 containerd[1963]: time="2025-06-20T18:30:22.398352811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:30:22.949533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027463162.mount: Deactivated successfully. 
Jun 20 18:30:22.962209 containerd[1963]: time="2025-06-20T18:30:22.962134210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:22.965243 containerd[1963]: time="2025-06-20T18:30:22.965164402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jun 20 18:30:22.967807 containerd[1963]: time="2025-06-20T18:30:22.967750666Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:22.974142 containerd[1963]: time="2025-06-20T18:30:22.974056762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:22.976017 containerd[1963]: time="2025-06-20T18:30:22.975818638Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 577.407075ms" Jun 20 18:30:22.976017 containerd[1963]: time="2025-06-20T18:30:22.975881254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 20 18:30:22.976671 containerd[1963]: time="2025-06-20T18:30:22.976574410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 18:30:23.526737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443523895.mount: Deactivated successfully. Jun 20 18:30:25.637911 containerd[1963]: time="2025-06-20T18:30:25.637838963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:25.683470 containerd[1963]: time="2025-06-20T18:30:25.683373311Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Jun 20 18:30:25.722659 containerd[1963]: time="2025-06-20T18:30:25.721929515Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:25.780750 containerd[1963]: time="2025-06-20T18:30:25.780683280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:25.783610 containerd[1963]: time="2025-06-20T18:30:25.783546840Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.806923062s" Jun 20 18:30:25.783610 containerd[1963]: time="2025-06-20T18:30:25.783604272Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jun 20 18:30:30.280168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jun 20 18:30:30.291963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:30.629889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:30.644510 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:30:30.716541 kubelet[2710]: E0620 18:30:30.716481 2710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:30:30.720213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:30:30.720523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:30:30.721329 systemd[1]: kubelet.service: Consumed 274ms CPU time, 105.2M memory peak. Jun 20 18:30:34.525705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:34.526964 systemd[1]: kubelet.service: Consumed 274ms CPU time, 105.2M memory peak. Jun 20 18:30:34.540100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:34.597919 systemd[1]: Reload requested from client PID 2725 ('systemctl') (unit session-7.scope)... Jun 20 18:30:34.598125 systemd[1]: Reloading... Jun 20 18:30:34.835667 zram_generator::config[2774]: No configuration found. Jun 20 18:30:35.083238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:30:35.308826 systemd[1]: Reloading finished in 709 ms. Jun 20 18:30:35.397129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:35.410409 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:30:35.413358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:35.414240 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:30:35.414832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:35.414911 systemd[1]: kubelet.service: Consumed 224ms CPU time, 95M memory peak. Jun 20 18:30:35.423218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:35.742914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:35.744362 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:30:35.815676 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:30:35.815676 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:30:35.815676 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:30:35.815676 kubelet[2837]: I0620 18:30:35.813859 2837 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:30:36.203336 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 20 18:30:37.043273 kubelet[2837]: I0620 18:30:37.043205 2837 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:30:37.043273 kubelet[2837]: I0620 18:30:37.043254 2837 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:30:37.043941 kubelet[2837]: I0620 18:30:37.043724 2837 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:30:37.081060 kubelet[2837]: E0620 18:30:37.080977 2837 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.22.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 18:30:37.085233 kubelet[2837]: I0620 18:30:37.085074 2837 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:30:37.101677 kubelet[2837]: E0620 18:30:37.100660 2837 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:30:37.101677 kubelet[2837]: I0620 18:30:37.100727 2837 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:30:37.106423 kubelet[2837]: I0620 18:30:37.106386 2837 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:30:37.107119 kubelet[2837]: I0620 18:30:37.107078 2837 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:30:37.107496 kubelet[2837]: I0620 18:30:37.107238 2837 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:30:37.107778 kubelet[2837]: I0620 18:30:37.107755 2837 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:30:37.107886 kubelet[2837]: I0620 18:30:37.107868 2837 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:30:37.110608 kubelet[2837]: I0620 18:30:37.110578 2837 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:30:37.116592 kubelet[2837]: I0620 18:30:37.116557 2837 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:30:37.116791 kubelet[2837]: I0620 18:30:37.116769 2837 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:30:37.116925 kubelet[2837]: I0620 18:30:37.116907 2837 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:30:37.117032 kubelet[2837]: I0620 18:30:37.117013 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:30:37.124195 kubelet[2837]: E0620 18:30:37.124133 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-87&limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:30:37.124342 kubelet[2837]: I0620 18:30:37.124312 2837 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:30:37.125371 kubelet[2837]: I0620 18:30:37.125313 2837 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Jun 20 18:30:37.125492 kubelet[2837]: W0620 18:30:37.125447 2837 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 18:30:37.131396 kubelet[2837]: I0620 18:30:37.131339 2837 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:30:37.131537 kubelet[2837]: I0620 18:30:37.131414 2837 server.go:1289] "Started kubelet" Jun 20 18:30:37.133687 kubelet[2837]: E0620 18:30:37.133477 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:30:37.133687 kubelet[2837]: I0620 18:30:37.133557 2837 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:30:37.139478 kubelet[2837]: I0620 18:30:37.137023 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:30:37.139478 kubelet[2837]: I0620 18:30:37.137337 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:30:37.139478 kubelet[2837]: I0620 18:30:37.137419 2837 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:30:37.144104 kubelet[2837]: E0620 18:30:37.141863 2837 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.87:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-87.184ad3c269585a78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-87,UID:ip-172-31-22-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-87,},FirstTimestamp:2025-06-20 18:30:37.131373176 +0000 UTC m=+1.377219296,LastTimestamp:2025-06-20 18:30:37.131373176 +0000 UTC m=+1.377219296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-87,}" Jun 20 18:30:37.146956 kubelet[2837]: I0620 18:30:37.146419 2837 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:30:37.149711 kubelet[2837]: I0620 18:30:37.149615 2837 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:30:37.150150 kubelet[2837]: E0620 18:30:37.150101 2837 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-87\" not found" Jun 20 18:30:37.150531 kubelet[2837]: I0620 18:30:37.150501 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:30:37.152441 kubelet[2837]: I0620 18:30:37.152379 2837 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:30:37.152591 kubelet[2837]: I0620 18:30:37.152486 2837 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:30:37.159916 kubelet[2837]: E0620 18:30:37.159871 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:30:37.161260 kubelet[2837]: E0620 18:30:37.161186 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-87?timeout=10s\": dial tcp 172.31.22.87:6443: connect: connection refused" interval="200ms" Jun 20 18:30:37.164370 kubelet[2837]: I0620 18:30:37.164305 2837 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:30:37.166315 kubelet[2837]: I0620 18:30:37.166245 2837 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:30:37.168280 kubelet[2837]: E0620 18:30:37.167719 2837 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:30:37.170033 kubelet[2837]: I0620 18:30:37.169986 2837 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:30:37.190945 kubelet[2837]: I0620 18:30:37.190827 2837 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:30:37.194850 kubelet[2837]: I0620 18:30:37.194624 2837 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:30:37.194850 kubelet[2837]: I0620 18:30:37.194696 2837 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:30:37.194850 kubelet[2837]: I0620 18:30:37.194729 2837 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:30:37.194850 kubelet[2837]: I0620 18:30:37.194758 2837 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:30:37.195114 kubelet[2837]: E0620 18:30:37.194849 2837 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:30:37.201074 kubelet[2837]: E0620 18:30:37.200916 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:30:37.203596 kubelet[2837]: I0620 18:30:37.202824 2837 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:30:37.203596 kubelet[2837]: I0620 18:30:37.202863 2837 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:30:37.203596 kubelet[2837]: I0620 18:30:37.202892 2837 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:30:37.209977 kubelet[2837]: I0620 18:30:37.209923 2837 policy_none.go:49] "None policy: Start" Jun 20 18:30:37.209977 kubelet[2837]: I0620 18:30:37.209970 2837 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:30:37.210159 kubelet[2837]: I0620 18:30:37.209996 2837 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:30:37.222271 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:30:37.239680 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 18:30:37.245798 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:30:37.250726 kubelet[2837]: E0620 18:30:37.250668 2837 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-87\" not found" Jun 20 18:30:37.259713 kubelet[2837]: E0620 18:30:37.259655 2837 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:30:37.259989 kubelet[2837]: I0620 18:30:37.259951 2837 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:30:37.260095 kubelet[2837]: I0620 18:30:37.259983 2837 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:30:37.260708 kubelet[2837]: I0620 18:30:37.260563 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:30:37.263669 kubelet[2837]: E0620 18:30:37.263549 2837 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:30:37.263669 kubelet[2837]: E0620 18:30:37.263621 2837 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-87\" not found" Jun 20 18:30:37.318166 systemd[1]: Created slice kubepods-burstable-pod9b2a454eea0b9255d2b9c9fd08e834bc.slice - libcontainer container kubepods-burstable-pod9b2a454eea0b9255d2b9c9fd08e834bc.slice. Jun 20 18:30:37.331581 kubelet[2837]: E0620 18:30:37.331217 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:37.341658 systemd[1]: Created slice kubepods-burstable-pod28e425d89e1341c0780b92282f49a9e6.slice - libcontainer container kubepods-burstable-pod28e425d89e1341c0780b92282f49a9e6.slice. Jun 20 18:30:37.346715 kubelet[2837]: E0620 18:30:37.346672 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:37.351397 systemd[1]: Created slice kubepods-burstable-podbd643eca72d047add3d6be450d379d9b.slice - libcontainer container kubepods-burstable-podbd643eca72d047add3d6be450d379d9b.slice. 
Jun 20 18:30:37.353258 kubelet[2837]: I0620 18:30:37.352862 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b2a454eea0b9255d2b9c9fd08e834bc-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-87\" (UID: \"9b2a454eea0b9255d2b9c9fd08e834bc\") " pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:37.353258 kubelet[2837]: I0620 18:30:37.352931 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:37.353258 kubelet[2837]: I0620 18:30:37.352979 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:37.353258 kubelet[2837]: I0620 18:30:37.353019 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd643eca72d047add3d6be450d379d9b-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-87\" (UID: \"bd643eca72d047add3d6be450d379d9b\") " pod="kube-system/kube-scheduler-ip-172-31-22-87" Jun 20 18:30:37.353258 kubelet[2837]: I0620 18:30:37.353053 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b2a454eea0b9255d2b9c9fd08e834bc-ca-certs\") pod \"kube-apiserver-ip-172-31-22-87\" (UID: \"9b2a454eea0b9255d2b9c9fd08e834bc\") " pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:37.353691 kubelet[2837]: I0620 18:30:37.353094 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b2a454eea0b9255d2b9c9fd08e834bc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-87\" (UID: \"9b2a454eea0b9255d2b9c9fd08e834bc\") " pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:37.353691 kubelet[2837]: I0620 18:30:37.353129 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:37.353691 kubelet[2837]: I0620 18:30:37.353170 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:37.353691 kubelet[2837]: I0620 18:30:37.353204 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:37.355481 kubelet[2837]: E0620 18:30:37.355437 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:37.362345 kubelet[2837]: I0620 18:30:37.362304 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-87" Jun 20 18:30:37.363071 kubelet[2837]: E0620 18:30:37.362874 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.87:6443/api/v1/nodes\": dial tcp 172.31.22.87:6443: connect: connection refused" node="ip-172-31-22-87" Jun 20 18:30:37.363071 kubelet[2837]: E0620 18:30:37.362996 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-87?timeout=10s\": dial tcp 172.31.22.87:6443: connect: connection refused" interval="400ms" Jun 20 18:30:37.565692 kubelet[2837]: I0620 18:30:37.565621 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-87" Jun 20 18:30:37.566290 kubelet[2837]: E0620 18:30:37.566242 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.87:6443/api/v1/nodes\": dial tcp 172.31.22.87:6443: connect: connection refused" node="ip-172-31-22-87" Jun 20 18:30:37.632943 containerd[1963]: time="2025-06-20T18:30:37.632813063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-87,Uid:9b2a454eea0b9255d2b9c9fd08e834bc,Namespace:kube-system,Attempt:0,}" Jun 20 18:30:37.648770 containerd[1963]: time="2025-06-20T18:30:37.648715151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-87,Uid:28e425d89e1341c0780b92282f49a9e6,Namespace:kube-system,Attempt:0,}" Jun 20 18:30:37.657601 containerd[1963]: time="2025-06-20T18:30:37.657533327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-87,Uid:bd643eca72d047add3d6be450d379d9b,Namespace:kube-system,Attempt:0,}" Jun 20 18:30:37.764350 kubelet[2837]: E0620 18:30:37.764280 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-87?timeout=10s\": dial tcp 172.31.22.87:6443: connect: connection refused" interval="800ms" Jun 20 18:30:37.968116 kubelet[2837]: E0620 18:30:37.967941 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:30:37.968944 kubelet[2837]: I0620 18:30:37.968894 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-87" Jun 20 18:30:37.969930 kubelet[2837]: E0620 18:30:37.969878 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.87:6443/api/v1/nodes\": dial tcp 172.31.22.87:6443: connect: connection refused" node="ip-172-31-22-87" Jun 20 18:30:38.075590 kubelet[2837]: E0620 18:30:38.075508 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://172.31.22.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:30:38.133040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815448583.mount: Deactivated successfully. Jun 20 18:30:38.148371 containerd[1963]: time="2025-06-20T18:30:38.148287333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:30:38.157206 containerd[1963]: time="2025-06-20T18:30:38.157091181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 20 18:30:38.160119 containerd[1963]: time="2025-06-20T18:30:38.159007701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:30:38.162058 containerd[1963]: time="2025-06-20T18:30:38.161823333Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:30:38.165488 containerd[1963]: time="2025-06-20T18:30:38.165412509Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:30:38.167790 containerd[1963]: time="2025-06-20T18:30:38.167701413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:30:38.169905 containerd[1963]: time="2025-06-20T18:30:38.169831293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:30:38.172434 containerd[1963]: time="2025-06-20T18:30:38.172229517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:30:38.177887 containerd[1963]: time="2025-06-20T18:30:38.177147969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.47693ms" Jun 20 18:30:38.180354 containerd[1963]: time="2025-06-20T18:30:38.179981049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.060682ms" Jun 20 18:30:38.182236 containerd[1963]: time="2025-06-20T18:30:38.182137965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.311946ms" Jun 20 18:30:38.269031 kubelet[2837]: E0620 18:30:38.268816 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:30:38.374349 containerd[1963]: time="2025-06-20T18:30:38.373957954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:30:38.375088 containerd[1963]: time="2025-06-20T18:30:38.374092138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:30:38.375088 containerd[1963]: time="2025-06-20T18:30:38.374576710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:38.377309 containerd[1963]: time="2025-06-20T18:30:38.377214658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:38.387619 containerd[1963]: time="2025-06-20T18:30:38.387431434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:30:38.387619 containerd[1963]: time="2025-06-20T18:30:38.387537574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:30:38.387619 containerd[1963]: time="2025-06-20T18:30:38.387576058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:38.388144 containerd[1963]: time="2025-06-20T18:30:38.387752926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:38.392230 containerd[1963]: time="2025-06-20T18:30:38.391707310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:30:38.393351 containerd[1963]: time="2025-06-20T18:30:38.393055822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:30:38.393575 containerd[1963]: time="2025-06-20T18:30:38.393322726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:38.394323 containerd[1963]: time="2025-06-20T18:30:38.394076902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:38.429018 systemd[1]: Started cri-containerd-13d691c55a5f2c4f98540ea070efb5f1d6eafb53105c8d6c39b97fcc233daf13.scope - libcontainer container 13d691c55a5f2c4f98540ea070efb5f1d6eafb53105c8d6c39b97fcc233daf13. 
Jun 20 18:30:38.440099 kubelet[2837]: E0620 18:30:38.439924 2837 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-87&limit=500&resourceVersion=0\": dial tcp 172.31.22.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:30:38.459948 systemd[1]: Started cri-containerd-a901066bd062792d97c78a98ca408d22050bce0a7ca92d1bef1635ec39289a79.scope - libcontainer container a901066bd062792d97c78a98ca408d22050bce0a7ca92d1bef1635ec39289a79. Jun 20 18:30:38.464424 systemd[1]: Started cri-containerd-bd5db25e1d8776ad9fce38dd9b44a85a163e4e4a3f2de8040b17985a4084285f.scope - libcontainer container bd5db25e1d8776ad9fce38dd9b44a85a163e4e4a3f2de8040b17985a4084285f. Jun 20 18:30:38.565529 kubelet[2837]: E0620 18:30:38.565337 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-87?timeout=10s\": dial tcp 172.31.22.87:6443: connect: connection refused" interval="1.6s" Jun 20 18:30:38.577065 containerd[1963]: time="2025-06-20T18:30:38.576870431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-87,Uid:28e425d89e1341c0780b92282f49a9e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"13d691c55a5f2c4f98540ea070efb5f1d6eafb53105c8d6c39b97fcc233daf13\"" Jun 20 18:30:38.588236 containerd[1963]: time="2025-06-20T18:30:38.588150599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-87,Uid:bd643eca72d047add3d6be450d379d9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a901066bd062792d97c78a98ca408d22050bce0a7ca92d1bef1635ec39289a79\"" Jun 20 18:30:38.592882 containerd[1963]: time="2025-06-20T18:30:38.592530587Z" level=info msg="CreateContainer within sandbox \"13d691c55a5f2c4f98540ea070efb5f1d6eafb53105c8d6c39b97fcc233daf13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:30:38.595869 containerd[1963]: time="2025-06-20T18:30:38.594845003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-87,Uid:9b2a454eea0b9255d2b9c9fd08e834bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5db25e1d8776ad9fce38dd9b44a85a163e4e4a3f2de8040b17985a4084285f\"" Jun 20 18:30:38.611271 containerd[1963]: time="2025-06-20T18:30:38.611192916Z" level=info msg="CreateContainer within sandbox \"a901066bd062792d97c78a98ca408d22050bce0a7ca92d1bef1635ec39289a79\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:30:38.614687 containerd[1963]: time="2025-06-20T18:30:38.614572872Z" level=info msg="CreateContainer within sandbox \"bd5db25e1d8776ad9fce38dd9b44a85a163e4e4a3f2de8040b17985a4084285f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:30:38.645001 containerd[1963]: time="2025-06-20T18:30:38.644924076Z" level=info msg="CreateContainer within sandbox \"13d691c55a5f2c4f98540ea070efb5f1d6eafb53105c8d6c39b97fcc233daf13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0\"" Jun 20 18:30:38.646591 containerd[1963]: time="2025-06-20T18:30:38.646283460Z" level=info msg="StartContainer for \"88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0\"" Jun 20 18:30:38.656596 containerd[1963]: 
time="2025-06-20T18:30:38.656284512Z" level=info msg="CreateContainer within sandbox \"a901066bd062792d97c78a98ca408d22050bce0a7ca92d1bef1635ec39289a79\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124\"" Jun 20 18:30:38.657726 containerd[1963]: time="2025-06-20T18:30:38.657279588Z" level=info msg="StartContainer for \"ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124\"" Jun 20 18:30:38.664078 containerd[1963]: time="2025-06-20T18:30:38.663999108Z" level=info msg="CreateContainer within sandbox \"bd5db25e1d8776ad9fce38dd9b44a85a163e4e4a3f2de8040b17985a4084285f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d0abfee326e81f473e2f1d32f289a3d6e20e399a8862002ea8a49fba54c61ce0\"" Jun 20 18:30:38.664871 containerd[1963]: time="2025-06-20T18:30:38.664808988Z" level=info msg="StartContainer for \"d0abfee326e81f473e2f1d32f289a3d6e20e399a8862002ea8a49fba54c61ce0\"" Jun 20 18:30:38.717987 systemd[1]: Started cri-containerd-88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0.scope - libcontainer container 88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0. Jun 20 18:30:38.747786 systemd[1]: Started cri-containerd-ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124.scope - libcontainer container ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124. Jun 20 18:30:38.772322 systemd[1]: Started cri-containerd-d0abfee326e81f473e2f1d32f289a3d6e20e399a8862002ea8a49fba54c61ce0.scope - libcontainer container d0abfee326e81f473e2f1d32f289a3d6e20e399a8862002ea8a49fba54c61ce0. Jun 20 18:30:38.779915 kubelet[2837]: I0620 18:30:38.779642 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-87" Jun 20 18:30:38.780751 kubelet[2837]: E0620 18:30:38.780434 2837 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.87:6443/api/v1/nodes\": dial tcp 172.31.22.87:6443: connect: connection refused" node="ip-172-31-22-87" Jun 20 18:30:38.841060 containerd[1963]: time="2025-06-20T18:30:38.840762997Z" level=info msg="StartContainer for \"88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0\" returns successfully" Jun 20 18:30:38.901465 containerd[1963]: time="2025-06-20T18:30:38.899874961Z" level=info msg="StartContainer for \"d0abfee326e81f473e2f1d32f289a3d6e20e399a8862002ea8a49fba54c61ce0\" returns successfully" Jun 20 18:30:38.922535 containerd[1963]: time="2025-06-20T18:30:38.922460881Z" level=info msg="StartContainer for \"ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124\" returns successfully" Jun 20 18:30:39.216562 kubelet[2837]: E0620 18:30:39.216419 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:39.221522 kubelet[2837]: E0620 18:30:39.221163 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:39.227019 kubelet[2837]: E0620 18:30:39.226973 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:40.228981 kubelet[2837]: E0620 18:30:40.228491 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:40.228981 kubelet[2837]: E0620 18:30:40.228770 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:40.231106 kubelet[2837]: E0620 18:30:40.229674 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:40.383289 kubelet[2837]: I0620 18:30:40.383233 2837 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-87" Jun 20 18:30:41.233715 kubelet[2837]: E0620 18:30:41.232708 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:41.421680 kubelet[2837]: E0620 18:30:41.421281 2837 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:42.135679 kubelet[2837]: I0620 18:30:42.134487 2837 apiserver.go:52] "Watching apiserver" Jun 20 18:30:42.339464 kubelet[2837]: E0620 18:30:42.339416 2837 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-87\" not found" node="ip-172-31-22-87" Jun 20 18:30:42.353545 kubelet[2837]: I0620 18:30:42.353455 2837 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:30:42.424881 kubelet[2837]: I0620 18:30:42.424395 2837 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-87" Jun 20 18:30:42.453450 kubelet[2837]: I0620 18:30:42.452832 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:42.488615 kubelet[2837]: E0620 18:30:42.488553 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:42.488615 kubelet[2837]: I0620 18:30:42.488604 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:42.497654 kubelet[2837]: E0620 18:30:42.496169 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:42.497654 kubelet[2837]: I0620 18:30:42.496214 2837 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-87" Jun 20 18:30:42.502300 kubelet[2837]: E0620 18:30:42.502243 2837 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-87" Jun 20 18:30:45.986048 systemd[1]: Reload requested from client PID 3125 ('systemctl') (unit session-7.scope)... Jun 20 18:30:45.986748 systemd[1]: Reloading... Jun 20 18:30:46.244687 zram_generator::config[3179]: No configuration found. 
Jun 20 18:30:46.494047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:30:46.759955 systemd[1]: Reloading finished in 771 ms. Jun 20 18:30:46.820946 kubelet[2837]: I0620 18:30:46.820876 2837 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:30:46.821857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:46.837494 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:30:46.838270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:46.838613 systemd[1]: kubelet.service: Consumed 2.161s CPU time, 128.8M memory peak. Jun 20 18:30:46.848150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:30:47.217968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:30:47.227219 (kubelet)[3230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:30:47.343014 kubelet[3230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:30:47.343014 kubelet[3230]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:30:47.343014 kubelet[3230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:30:47.343014 kubelet[3230]: I0620 18:30:47.342823 3230 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:30:47.358220 kubelet[3230]: I0620 18:30:47.358149 3230 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:30:47.358220 kubelet[3230]: I0620 18:30:47.358204 3230 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:30:47.360952 kubelet[3230]: I0620 18:30:47.358741 3230 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:30:47.361811 kubelet[3230]: I0620 18:30:47.361747 3230 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 18:30:47.373740 kubelet[3230]: I0620 18:30:47.372495 3230 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:30:47.386307 kubelet[3230]: E0620 18:30:47.386228 3230 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:30:47.386307 kubelet[3230]: I0620 18:30:47.386299 3230 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:30:47.392481 kubelet[3230]: I0620 18:30:47.392361 3230 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:30:47.395019 kubelet[3230]: I0620 18:30:47.394940 3230 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:30:47.395312 kubelet[3230]: I0620 18:30:47.395004 3230 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:30:47.395482 kubelet[3230]: I0620 18:30:47.395323 3230 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:30:47.395482 kubelet[3230]: I0620 18:30:47.395345 3230 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:30:47.395482 kubelet[3230]: I0620 18:30:47.395424 3230 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:30:47.397040 kubelet[3230]: I0620 18:30:47.396999 3230 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:30:47.397040 kubelet[3230]: I0620 18:30:47.397047 3230 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:30:47.397232 kubelet[3230]: I0620 18:30:47.397097 3230 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:30:47.397232 kubelet[3230]: I0620 18:30:47.397126 3230 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:30:47.404349 kubelet[3230]: I0620 18:30:47.404156 3230 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:30:47.410018 sudo[3244]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:30:47.410725 sudo[3244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:30:47.414400 kubelet[3230]: I0620 18:30:47.414346 3230 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:30:47.434018 kubelet[3230]: I0620 18:30:47.433883 3230 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:30:47.434018 kubelet[3230]: 
I0620 18:30:47.433963 3230 server.go:1289] "Started kubelet" Jun 20 18:30:47.445036 kubelet[3230]: I0620 18:30:47.444782 3230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:30:47.458564 kubelet[3230]: I0620 18:30:47.458485 3230 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:30:47.466469 kubelet[3230]: I0620 18:30:47.465758 3230 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:30:47.475806 kubelet[3230]: I0620 18:30:47.475608 3230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:30:47.476732 kubelet[3230]: I0620 18:30:47.476375 3230 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:30:47.496399 kubelet[3230]: I0620 18:30:47.495374 3230 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:30:47.504875 kubelet[3230]: I0620 18:30:47.504832 3230 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:30:47.512110 kubelet[3230]: E0620 18:30:47.507342 3230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-87\" not found" Jun 20 18:30:47.512297 kubelet[3230]: I0620 18:30:47.507874 3230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:30:47.520849 kubelet[3230]: I0620 18:30:47.520807 3230 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:30:47.522667 kubelet[3230]: I0620 18:30:47.522050 3230 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:30:47.525133 kubelet[3230]: I0620 18:30:47.521124 3230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:30:47.525133 kubelet[3230]: I0620 18:30:47.524325 3230 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:30:47.525133 kubelet[3230]: I0620 18:30:47.524358 3230 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:30:47.525133 kubelet[3230]: I0620 18:30:47.524373 3230 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:30:47.525133 kubelet[3230]: E0620 18:30:47.524457 3230 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:30:47.525133 kubelet[3230]: I0620 18:30:47.521317 3230 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:30:47.525133 kubelet[3230]: I0620 18:30:47.510343 3230 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:30:47.544793 kubelet[3230]: I0620 18:30:47.544757 3230 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:30:47.554850 kubelet[3230]: E0620 18:30:47.554777 3230 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:30:47.625114 kubelet[3230]: E0620 18:30:47.624860 3230 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:30:47.693459 kubelet[3230]: I0620 18:30:47.693108 3230 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:30:47.693459 kubelet[3230]: I0620 18:30:47.693141 3230 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:30:47.693459 kubelet[3230]: I0620 18:30:47.693177 3230 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:30:47.693459 kubelet[3230]: I0620 18:30:47.693413 3230 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:30:47.693459 kubelet[3230]: I0620 18:30:47.693435 3230 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:30:47.693459 kubelet[3230]: I0620 18:30:47.693469 3230 policy_none.go:49] "None policy: Start" Jun 20 18:30:47.694660 kubelet[3230]: I0620 18:30:47.693486 3230 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:30:47.694660 kubelet[3230]: I0620 18:30:47.693507 3230 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:30:47.694660 kubelet[3230]: I0620 18:30:47.693719 3230 state_mem.go:75] "Updated machine memory state" Jun 20 18:30:47.706127 kubelet[3230]: E0620 18:30:47.704727 3230 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:30:47.706127 kubelet[3230]: I0620 18:30:47.704992 3230 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:30:47.706127 kubelet[3230]: I0620 18:30:47.705010 3230 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:30:47.707822 kubelet[3230]: I0620 18:30:47.707792 3230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:30:47.713538 kubelet[3230]: E0620 18:30:47.713499 3230 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:30:47.827846 kubelet[3230]: I0620 18:30:47.826538 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-87" Jun 20 18:30:47.827846 kubelet[3230]: I0620 18:30:47.826656 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:47.827846 kubelet[3230]: I0620 18:30:47.826834 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:47.829954 kubelet[3230]: I0620 18:30:47.829025 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:47.829954 kubelet[3230]: I0620 18:30:47.829095 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:47.829954 kubelet[3230]: I0620 18:30:47.829135 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:47.829954 kubelet[3230]: I0620 18:30:47.829175 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:47.829954 kubelet[3230]: I0620 18:30:47.829221 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b2a454eea0b9255d2b9c9fd08e834bc-ca-certs\") pod \"kube-apiserver-ip-172-31-22-87\" (UID: \"9b2a454eea0b9255d2b9c9fd08e834bc\") " pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:47.830483 kubelet[3230]: I0620 18:30:47.829737 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b2a454eea0b9255d2b9c9fd08e834bc-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-87\" (UID: \"9b2a454eea0b9255d2b9c9fd08e834bc\") " pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:47.830483 kubelet[3230]: I0620 18:30:47.830297 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b2a454eea0b9255d2b9c9fd08e834bc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-87\" (UID: \"9b2a454eea0b9255d2b9c9fd08e834bc\") " pod="kube-system/kube-apiserver-ip-172-31-22-87" Jun 20 18:30:47.831970 kubelet[3230]: I0620 18:30:47.830673 3230 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28e425d89e1341c0780b92282f49a9e6-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-87\" (UID: \"28e425d89e1341c0780b92282f49a9e6\") " pod="kube-system/kube-controller-manager-ip-172-31-22-87" Jun 20 18:30:47.831970 kubelet[3230]: I0620 18:30:47.829055 3230 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-87" Jun 20 18:30:47.880287 kubelet[3230]: I0620 18:30:47.879983 3230 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-87" Jun 20 18:30:47.880287 kubelet[3230]: I0620 18:30:47.880167 3230 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-87" Jun 20 18:30:47.931158 kubelet[3230]: I0620 18:30:47.931099 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd643eca72d047add3d6be450d379d9b-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-87\" (UID: \"bd643eca72d047add3d6be450d379d9b\") " pod="kube-system/kube-scheduler-ip-172-31-22-87" Jun 20 18:30:48.379016 sudo[3244]: pam_unix(sudo:session): session closed for user root Jun 20 18:30:48.401322 kubelet[3230]: I0620 18:30:48.401218 3230 apiserver.go:52] "Watching apiserver" Jun 20 18:30:48.425364 kubelet[3230]: I0620 18:30:48.425295 3230 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:30:48.722990 kubelet[3230]: I0620 18:30:48.722607 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-87" podStartSLOduration=1.722583142 podStartE2EDuration="1.722583142s" podCreationTimestamp="2025-06-20 18:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:30:48.674918925 +0000 UTC m=+1.437917852" watchObservedRunningTime="2025-06-20 18:30:48.722583142 +0000 UTC m=+1.485582165" Jun 20 18:30:48.783671 kubelet[3230]: I0620 18:30:48.781776 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-87" podStartSLOduration=1.7817259220000001 podStartE2EDuration="1.781725922s" podCreationTimestamp="2025-06-20 18:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:30:48.781725982 +0000 UTC m=+1.544724921" watchObservedRunningTime="2025-06-20 18:30:48.781725922 +0000 UTC m=+1.544724861" Jun 20 18:30:48.783671 kubelet[3230]: I0620 18:30:48.782265 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-87" podStartSLOduration=1.782227126 podStartE2EDuration="1.782227126s" podCreationTimestamp="2025-06-20 18:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:30:48.726944338 +0000 UTC m=+1.489943289" watchObservedRunningTime="2025-06-20 18:30:48.782227126 +0000 UTC m=+1.545226053" Jun 20 18:30:50.677679 kubelet[3230]: I0620 18:30:50.675647 3230 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:30:50.677679 kubelet[3230]: I0620 18:30:50.677369 3230 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 
18:30:50.678276 containerd[1963]: time="2025-06-20T18:30:50.676596179Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:30:50.777718 sudo[2277]: pam_unix(sudo:session): session closed for user root Jun 20 18:30:50.803402 sshd[2276]: Connection closed by 147.75.109.163 port 49252 Jun 20 18:30:50.804436 sshd-session[2274]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:50.812546 systemd[1]: sshd@6-172.31.22.87:22-147.75.109.163:49252.service: Deactivated successfully. Jun 20 18:30:50.817988 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:30:50.818542 systemd[1]: session-7.scope: Consumed 12.120s CPU time, 263.7M memory peak. Jun 20 18:30:50.820827 systemd-logind[1929]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:30:50.823079 systemd-logind[1929]: Removed session 7. Jun 20 18:30:50.851433 update_engine[1931]: I20250620 18:30:50.851325 1931 update_attempter.cc:509] Updating boot flags... Jun 20 18:30:50.943816 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3320) Jun 20 18:30:51.411801 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3311) Jun 20 18:30:51.655454 systemd[1]: Created slice kubepods-besteffort-pod739fac5c_dc99_4a3d_80b2_2b73759939f7.slice - libcontainer container kubepods-besteffort-pod739fac5c_dc99_4a3d_80b2_2b73759939f7.slice. Jun 20 18:30:51.666796 kubelet[3230]: I0620 18:30:51.665465 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4v8s\" (UniqueName: \"kubernetes.io/projected/739fac5c-dc99-4a3d-80b2-2b73759939f7-kube-api-access-v4v8s\") pod \"kube-proxy-wfbdv\" (UID: \"739fac5c-dc99-4a3d-80b2-2b73759939f7\") " pod="kube-system/kube-proxy-wfbdv" Jun 20 18:30:51.666796 kubelet[3230]: I0620 18:30:51.665535 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/739fac5c-dc99-4a3d-80b2-2b73759939f7-lib-modules\") pod \"kube-proxy-wfbdv\" (UID: \"739fac5c-dc99-4a3d-80b2-2b73759939f7\") " pod="kube-system/kube-proxy-wfbdv" Jun 20 18:30:51.666796 kubelet[3230]: I0620 18:30:51.665574 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/739fac5c-dc99-4a3d-80b2-2b73759939f7-kube-proxy\") pod \"kube-proxy-wfbdv\" (UID: \"739fac5c-dc99-4a3d-80b2-2b73759939f7\") " pod="kube-system/kube-proxy-wfbdv" Jun 20 18:30:51.666796 kubelet[3230]: I0620 18:30:51.665609 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/739fac5c-dc99-4a3d-80b2-2b73759939f7-xtables-lock\") pod \"kube-proxy-wfbdv\" (UID: \"739fac5c-dc99-4a3d-80b2-2b73759939f7\") " pod="kube-system/kube-proxy-wfbdv" Jun 20 18:30:51.773662 kubelet[3230]: I0620 18:30:51.770845 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-etc-cni-netd\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.777984 kubelet[3230]: I0620 18:30:51.774350 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-xtables-lock\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.777984 kubelet[3230]: I0620 18:30:51.774414 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-net\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.777984 kubelet[3230]: I0620 18:30:51.774471 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-hubble-tls\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.777984 kubelet[3230]: I0620 18:30:51.774527 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rgkl\" (UniqueName: \"kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-kube-api-access-7rgkl\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.777984 kubelet[3230]: I0620 18:30:51.774674 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-run\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.777984 kubelet[3230]: I0620 18:30:51.774726 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-bpf-maps\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.778401 kubelet[3230]: I0620 18:30:51.774799 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-lib-modules\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.778401 kubelet[3230]: I0620 18:30:51.774855 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-cgroup\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.778401 kubelet[3230]: I0620 18:30:51.774903 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01009017-aa32-43ee-845e-a0712813ae67-clustermesh-secrets\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.778401 kubelet[3230]: I0620 18:30:51.774942 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01009017-aa32-43ee-845e-a0712813ae67-cilium-config-path\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " 
pod="kube-system/cilium-qp887" Jun 20 18:30:51.778401 kubelet[3230]: I0620 18:30:51.774991 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-kernel\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.778401 kubelet[3230]: I0620 18:30:51.775086 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-hostproc\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.779853 kubelet[3230]: I0620 18:30:51.775141 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cni-path\") pod \"cilium-qp887\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " pod="kube-system/cilium-qp887" Jun 20 18:30:51.828748 systemd[1]: Created slice kubepods-burstable-pod01009017_aa32_43ee_845e_a0712813ae67.slice - libcontainer container kubepods-burstable-pod01009017_aa32_43ee_845e_a0712813ae67.slice. Jun 20 18:30:51.922885 kubelet[3230]: E0620 18:30:51.922233 3230 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 18:30:51.922885 kubelet[3230]: E0620 18:30:51.922281 3230 projected.go:194] Error preparing data for projected volume kube-api-access-v4v8s for pod kube-system/kube-proxy-wfbdv: configmap "kube-root-ca.crt" not found Jun 20 18:30:51.922885 kubelet[3230]: E0620 18:30:51.922379 3230 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/739fac5c-dc99-4a3d-80b2-2b73759939f7-kube-api-access-v4v8s podName:739fac5c-dc99-4a3d-80b2-2b73759939f7 nodeName:}" failed. No retries permitted until 2025-06-20 18:30:52.422341518 +0000 UTC m=+5.185340445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v4v8s" (UniqueName: "kubernetes.io/projected/739fac5c-dc99-4a3d-80b2-2b73759939f7-kube-api-access-v4v8s") pod "kube-proxy-wfbdv" (UID: "739fac5c-dc99-4a3d-80b2-2b73759939f7") : configmap "kube-root-ca.crt" not found Jun 20 18:30:51.966293 kubelet[3230]: E0620 18:30:51.965581 3230 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 18:30:51.966293 kubelet[3230]: E0620 18:30:51.965647 3230 projected.go:194] Error preparing data for projected volume kube-api-access-7rgkl for pod kube-system/cilium-qp887: configmap "kube-root-ca.crt" not found Jun 20 18:30:51.966293 kubelet[3230]: E0620 18:30:51.965730 3230 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-kube-api-access-7rgkl podName:01009017-aa32-43ee-845e-a0712813ae67 nodeName:}" failed. No retries permitted until 2025-06-20 18:30:52.46570233 +0000 UTC m=+5.228701245 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7rgkl" (UniqueName: "kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-kube-api-access-7rgkl") pod "cilium-qp887" (UID: "01009017-aa32-43ee-845e-a0712813ae67") : configmap "kube-root-ca.crt" not found Jun 20 18:30:52.038995 systemd[1]: Created slice kubepods-besteffort-pod31f4d569_fdd1_43a2_847b_e897730d763a.slice - libcontainer container kubepods-besteffort-pod31f4d569_fdd1_43a2_847b_e897730d763a.slice. Jun 20 18:30:52.082430 kubelet[3230]: I0620 18:30:52.082354 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f4d569-fdd1-43a2-847b-e897730d763a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-l4pks\" (UID: \"31f4d569-fdd1-43a2-847b-e897730d763a\") " pod="kube-system/cilium-operator-6c4d7847fc-l4pks" Jun 20 18:30:52.082430 kubelet[3230]: I0620 18:30:52.082431 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ktm\" (UniqueName: \"kubernetes.io/projected/31f4d569-fdd1-43a2-847b-e897730d763a-kube-api-access-r6ktm\") pod \"cilium-operator-6c4d7847fc-l4pks\" (UID: \"31f4d569-fdd1-43a2-847b-e897730d763a\") " pod="kube-system/cilium-operator-6c4d7847fc-l4pks" Jun 20 18:30:52.348662 containerd[1963]: time="2025-06-20T18:30:52.348575196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l4pks,Uid:31f4d569-fdd1-43a2-847b-e897730d763a,Namespace:kube-system,Attempt:0,}" Jun 20 18:30:52.436716 containerd[1963]: time="2025-06-20T18:30:52.436193484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:30:52.436716 containerd[1963]: time="2025-06-20T18:30:52.436315776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:30:52.436716 containerd[1963]: time="2025-06-20T18:30:52.436353432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:52.436716 containerd[1963]: time="2025-06-20T18:30:52.436562592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:52.505034 systemd[1]: Started cri-containerd-25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c.scope - libcontainer container 25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c. Jun 20 18:30:52.523366 containerd[1963]: time="2025-06-20T18:30:52.523297609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qp887,Uid:01009017-aa32-43ee-845e-a0712813ae67,Namespace:kube-system,Attempt:0,}" Jun 20 18:30:52.588611 containerd[1963]: time="2025-06-20T18:30:52.588191869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:30:52.588611 containerd[1963]: time="2025-06-20T18:30:52.588295381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:30:52.588611 containerd[1963]: time="2025-06-20T18:30:52.588339589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:52.588611 containerd[1963]: time="2025-06-20T18:30:52.588524545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:52.596398 containerd[1963]: time="2025-06-20T18:30:52.596297845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l4pks,Uid:31f4d569-fdd1-43a2-847b-e897730d763a,Namespace:kube-system,Attempt:0,} returns sandbox id \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\"" Jun 20 18:30:52.602320 containerd[1963]: time="2025-06-20T18:30:52.600361333Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:30:52.616382 containerd[1963]: time="2025-06-20T18:30:52.616306609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfbdv,Uid:739fac5c-dc99-4a3d-80b2-2b73759939f7,Namespace:kube-system,Attempt:0,}" Jun 20 18:30:52.644483 systemd[1]: Started cri-containerd-c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9.scope - libcontainer container c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9. Jun 20 18:30:52.687186 containerd[1963]: time="2025-06-20T18:30:52.685838953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:30:52.687186 containerd[1963]: time="2025-06-20T18:30:52.686030773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:30:52.688924 containerd[1963]: time="2025-06-20T18:30:52.686221489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:52.689315 containerd[1963]: time="2025-06-20T18:30:52.689131285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:30:52.714186 containerd[1963]: time="2025-06-20T18:30:52.714010742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qp887,Uid:01009017-aa32-43ee-845e-a0712813ae67,Namespace:kube-system,Attempt:0,} returns sandbox id \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\"" Jun 20 18:30:52.738054 systemd[1]: Started cri-containerd-2ad3be7fbb325d74eb5a5089f27d24e6e1875c650937a7123e92335144aaa519.scope - libcontainer container 2ad3be7fbb325d74eb5a5089f27d24e6e1875c650937a7123e92335144aaa519. 
Jun 20 18:30:52.795900 containerd[1963]: time="2025-06-20T18:30:52.795812210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfbdv,Uid:739fac5c-dc99-4a3d-80b2-2b73759939f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad3be7fbb325d74eb5a5089f27d24e6e1875c650937a7123e92335144aaa519\"" Jun 20 18:30:52.808438 containerd[1963]: time="2025-06-20T18:30:52.808116890Z" level=info msg="CreateContainer within sandbox \"2ad3be7fbb325d74eb5a5089f27d24e6e1875c650937a7123e92335144aaa519\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:30:52.840964 containerd[1963]: time="2025-06-20T18:30:52.840867230Z" level=info msg="CreateContainer within sandbox \"2ad3be7fbb325d74eb5a5089f27d24e6e1875c650937a7123e92335144aaa519\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f3614bda38eb81c8dfde2f29da3643c69fabda3c4303e481d27171b8cf695c44\"" Jun 20 18:30:52.842891 containerd[1963]: time="2025-06-20T18:30:52.842771810Z" level=info msg="StartContainer for \"f3614bda38eb81c8dfde2f29da3643c69fabda3c4303e481d27171b8cf695c44\"" Jun 20 18:30:52.900972 systemd[1]: Started cri-containerd-f3614bda38eb81c8dfde2f29da3643c69fabda3c4303e481d27171b8cf695c44.scope - libcontainer container f3614bda38eb81c8dfde2f29da3643c69fabda3c4303e481d27171b8cf695c44. Jun 20 18:30:52.987841 containerd[1963]: time="2025-06-20T18:30:52.987755739Z" level=info msg="StartContainer for \"f3614bda38eb81c8dfde2f29da3643c69fabda3c4303e481d27171b8cf695c44\" returns successfully" Jun 20 18:30:53.802605 kubelet[3230]: I0620 18:30:53.802506 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wfbdv" podStartSLOduration=2.802482459 podStartE2EDuration="2.802482459s" podCreationTimestamp="2025-06-20 18:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:30:53.664757798 +0000 UTC m=+6.427756749" watchObservedRunningTime="2025-06-20 18:30:53.802482459 +0000 UTC m=+6.565481374" Jun 20 18:30:54.013901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075326832.mount: Deactivated successfully. 
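The kube-proxy-wfbdv startup entry just above reports podStartE2EDuration=2.802482459s against podCreationTimestamp 18:30:51 and watchObservedRunningTime 18:30:53.802482459. A minimal sketch of that arithmetic, assuming (as the numbers here suggest) that the end-to-end figure is simply observed-running time minus creation time; timestamps are copied from the entry and truncated to microseconds, since Python's datetime has no nanosecond field:

```python
from datetime import datetime, timezone

# Timestamps copied from the kubelet pod_startup_latency_tracker entry for kube-proxy-wfbdv above.
created = datetime(2025, 6, 20, 18, 30, 51, tzinfo=timezone.utc)
observed_running = datetime(2025, 6, 20, 18, 30, 53, 802482, tzinfo=timezone.utc)  # 18:30:53.802482459, truncated

e2e = (observed_running - created).total_seconds()
print(f"podStartE2EDuration ~= {e2e:.6f}s")  # ~2.802482s, in line with the logged 2.802482459s
```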
Jun 20 18:30:54.864882 containerd[1963]: time="2025-06-20T18:30:54.864815368Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:54.867101 containerd[1963]: time="2025-06-20T18:30:54.866993860Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jun 20 18:30:54.870148 containerd[1963]: time="2025-06-20T18:30:54.870049252Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:30:54.875300 containerd[1963]: time="2025-06-20T18:30:54.875231248Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.274794819s" Jun 20 18:30:54.875701 containerd[1963]: time="2025-06-20T18:30:54.875511328Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 20 18:30:54.879003 containerd[1963]: time="2025-06-20T18:30:54.878543392Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:30:54.886447 containerd[1963]: time="2025-06-20T18:30:54.886366840Z" level=info msg="CreateContainer within sandbox \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:30:54.923822 containerd[1963]: time="2025-06-20T18:30:54.923539517Z" level=info msg="CreateContainer within sandbox \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\"" Jun 20 18:30:54.924900 containerd[1963]: time="2025-06-20T18:30:54.924848753Z" level=info msg="StartContainer for \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\"" Jun 20 18:30:54.979982 systemd[1]: Started cri-containerd-66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2.scope - libcontainer container 66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2. Jun 20 18:30:55.050511 containerd[1963]: time="2025-06-20T18:30:55.050385265Z" level=info msg="StartContainer for \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\" returns successfully" Jun 20 18:31:02.629006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3368821309.mount: Deactivated successfully. 
Jun 20 18:31:05.253207 containerd[1963]: time="2025-06-20T18:31:05.253132692Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:31:05.255338 containerd[1963]: time="2025-06-20T18:31:05.255261024Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jun 20 18:31:05.257920 containerd[1963]: time="2025-06-20T18:31:05.257849880Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:31:05.263220 containerd[1963]: time="2025-06-20T18:31:05.263080920Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.384234816s" Jun 20 18:31:05.263220 containerd[1963]: time="2025-06-20T18:31:05.263141172Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 20 18:31:05.270749 containerd[1963]: time="2025-06-20T18:31:05.270672528Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:31:05.294674 containerd[1963]: time="2025-06-20T18:31:05.294549048Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\"" Jun 20 18:31:05.297498 containerd[1963]: time="2025-06-20T18:31:05.297381108Z" level=info msg="StartContainer for \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\"" Jun 20 18:31:05.347666 systemd[1]: run-containerd-runc-k8s.io-3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f-runc.eiS9c4.mount: Deactivated successfully. Jun 20 18:31:05.361933 systemd[1]: Started cri-containerd-3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f.scope - libcontainer container 3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f. Jun 20 18:31:05.414093 containerd[1963]: time="2025-06-20T18:31:05.414009265Z" level=info msg="StartContainer for \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\" returns successfully" Jun 20 18:31:05.432109 systemd[1]: cri-containerd-3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f.scope: Deactivated successfully. 
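The two PullImage results above report both the bytes read and the wall-clock pull time (17135306 bytes in 2.274794819s for operator-generic, 157646710 bytes in 10.384234816s for the cilium image). A rough throughput estimate from those figures, assuming "bytes read" approximates the compressed data actually transferred:

```python
# (bytes read, pull duration in seconds) as reported by the containerd entries above.
pulls = {
    "quay.io/cilium/operator-generic:v1.12.5": (17_135_306, 2.274794819),
    "quay.io/cilium/cilium:v1.12.5": (157_646_710, 10.384234816),
}

for image, (size_bytes, seconds) in pulls.items():
    rate = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: ~{rate:.1f} MiB/s effective pull rate")  # roughly 7.2 and 14.5 MiB/s
```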
Jun 20 18:31:05.713051 kubelet[3230]: I0620 18:31:05.712949 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-l4pks" podStartSLOduration=12.435416423 podStartE2EDuration="14.712925498s" podCreationTimestamp="2025-06-20 18:30:51 +0000 UTC" firstStartedPulling="2025-06-20 18:30:52.599529349 +0000 UTC m=+5.362528276" lastFinishedPulling="2025-06-20 18:30:54.87703834 +0000 UTC m=+7.640037351" observedRunningTime="2025-06-20 18:30:55.746747597 +0000 UTC m=+8.509746560" watchObservedRunningTime="2025-06-20 18:31:05.712925498 +0000 UTC m=+18.475924425" Jun 20 18:31:06.288438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f-rootfs.mount: Deactivated successfully. Jun 20 18:31:06.376674 containerd[1963]: time="2025-06-20T18:31:06.376537213Z" level=info msg="shim disconnected" id=3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f namespace=k8s.io Jun 20 18:31:06.376674 containerd[1963]: time="2025-06-20T18:31:06.376653121Z" level=warning msg="cleaning up after shim disconnected" id=3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f namespace=k8s.io Jun 20 18:31:06.376674 containerd[1963]: time="2025-06-20T18:31:06.376675201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:31:06.698891 containerd[1963]: time="2025-06-20T18:31:06.698300787Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:31:06.731873 containerd[1963]: time="2025-06-20T18:31:06.731744367Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\"" Jun 20 18:31:06.733498 containerd[1963]: time="2025-06-20T18:31:06.733171059Z" level=info msg="StartContainer for \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\"" Jun 20 18:31:06.790908 systemd[1]: Started cri-containerd-ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d.scope - libcontainer container ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d. Jun 20 18:31:06.851664 containerd[1963]: time="2025-06-20T18:31:06.851427760Z" level=info msg="StartContainer for \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\" returns successfully" Jun 20 18:31:06.874718 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:31:06.875251 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:31:06.875754 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:31:06.883096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:31:06.885127 systemd[1]: cri-containerd-ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d.scope: Deactivated successfully. Jun 20 18:31:06.925853 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 20 18:31:06.938288 containerd[1963]: time="2025-06-20T18:31:06.938161984Z" level=info msg="shim disconnected" id=ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d namespace=k8s.io Jun 20 18:31:06.938540 containerd[1963]: time="2025-06-20T18:31:06.938289508Z" level=warning msg="cleaning up after shim disconnected" id=ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d namespace=k8s.io Jun 20 18:31:06.938540 containerd[1963]: time="2025-06-20T18:31:06.938311420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:31:07.286949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d-rootfs.mount: Deactivated successfully. Jun 20 18:31:07.703236 containerd[1963]: time="2025-06-20T18:31:07.703041808Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:31:07.748884 containerd[1963]: time="2025-06-20T18:31:07.748821772Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\"" Jun 20 18:31:07.752157 containerd[1963]: time="2025-06-20T18:31:07.750721156Z" level=info msg="StartContainer for \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\"" Jun 20 18:31:07.812122 systemd[1]: Started cri-containerd-072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901.scope - libcontainer container 072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901. Jun 20 18:31:07.885073 containerd[1963]: time="2025-06-20T18:31:07.885017921Z" level=info msg="StartContainer for \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\" returns successfully" Jun 20 18:31:07.894056 systemd[1]: cri-containerd-072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901.scope: Deactivated successfully. Jun 20 18:31:07.917846 kubelet[3230]: E0620 18:31:07.917756 3230 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01009017_aa32_43ee_845e_a0712813ae67.slice/cri-containerd-072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901.scope\": RecentStats: unable to find data in memory cache]" Jun 20 18:31:07.950240 containerd[1963]: time="2025-06-20T18:31:07.950104925Z" level=info msg="shim disconnected" id=072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901 namespace=k8s.io Jun 20 18:31:07.950240 containerd[1963]: time="2025-06-20T18:31:07.950179025Z" level=warning msg="cleaning up after shim disconnected" id=072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901 namespace=k8s.io Jun 20 18:31:07.950240 containerd[1963]: time="2025-06-20T18:31:07.950198873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:31:08.286392 systemd[1]: run-containerd-runc-k8s.io-072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901-runc.GdrrcC.mount: Deactivated successfully. Jun 20 18:31:08.286569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901-rootfs.mount: Deactivated successfully. 
Jun 20 18:31:08.705779 containerd[1963]: time="2025-06-20T18:31:08.705328937Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:31:08.738662 containerd[1963]: time="2025-06-20T18:31:08.737720633Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\"" Jun 20 18:31:08.743045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1625749916.mount: Deactivated successfully. Jun 20 18:31:08.746239 containerd[1963]: time="2025-06-20T18:31:08.744572753Z" level=info msg="StartContainer for \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\"" Jun 20 18:31:08.802940 systemd[1]: Started cri-containerd-a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d.scope - libcontainer container a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d. Jun 20 18:31:08.848358 systemd[1]: cri-containerd-a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d.scope: Deactivated successfully. Jun 20 18:31:08.852330 containerd[1963]: time="2025-06-20T18:31:08.851949246Z" level=info msg="StartContainer for \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\" returns successfully" Jun 20 18:31:08.895239 containerd[1963]: time="2025-06-20T18:31:08.895137042Z" level=info msg="shim disconnected" id=a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d namespace=k8s.io Jun 20 18:31:08.895239 containerd[1963]: time="2025-06-20T18:31:08.895235454Z" level=warning msg="cleaning up after shim disconnected" id=a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d namespace=k8s.io Jun 20 18:31:08.895619 containerd[1963]: time="2025-06-20T18:31:08.895279542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:31:09.286705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d-rootfs.mount: Deactivated successfully. Jun 20 18:31:09.717417 containerd[1963]: time="2025-06-20T18:31:09.717258462Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:31:09.759708 containerd[1963]: time="2025-06-20T18:31:09.759513018Z" level=info msg="CreateContainer within sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\"" Jun 20 18:31:09.763828 containerd[1963]: time="2025-06-20T18:31:09.761008014Z" level=info msg="StartContainer for \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\"" Jun 20 18:31:09.825428 systemd[1]: Started cri-containerd-268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba.scope - libcontainer container 268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba. 
Jun 20 18:31:09.882151 containerd[1963]: time="2025-06-20T18:31:09.882048187Z" level=info msg="StartContainer for \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\" returns successfully" Jun 20 18:31:10.113798 kubelet[3230]: I0620 18:31:10.111605 3230 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:31:10.186921 systemd[1]: Created slice kubepods-burstable-pod217fd2f8_d4a4_4170_a5b1_c8aa8ddc4908.slice - libcontainer container kubepods-burstable-pod217fd2f8_d4a4_4170_a5b1_c8aa8ddc4908.slice. Jun 20 18:31:10.203563 systemd[1]: Created slice kubepods-burstable-pod95d48e29_4e87_4e1d_8bae_ce05b9adbe7b.slice - libcontainer container kubepods-burstable-pod95d48e29_4e87_4e1d_8bae_ce05b9adbe7b.slice. Jun 20 18:31:10.223882 kubelet[3230]: I0620 18:31:10.223804 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg94t\" (UniqueName: \"kubernetes.io/projected/217fd2f8-d4a4-4170-a5b1-c8aa8ddc4908-kube-api-access-kg94t\") pod \"coredns-674b8bbfcf-hq56h\" (UID: \"217fd2f8-d4a4-4170-a5b1-c8aa8ddc4908\") " pod="kube-system/coredns-674b8bbfcf-hq56h" Jun 20 18:31:10.225939 kubelet[3230]: I0620 18:31:10.225864 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95d48e29-4e87-4e1d-8bae-ce05b9adbe7b-config-volume\") pod \"coredns-674b8bbfcf-mls9r\" (UID: \"95d48e29-4e87-4e1d-8bae-ce05b9adbe7b\") " pod="kube-system/coredns-674b8bbfcf-mls9r" Jun 20 18:31:10.226235 kubelet[3230]: I0620 18:31:10.226204 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9ns5\" (UniqueName: \"kubernetes.io/projected/95d48e29-4e87-4e1d-8bae-ce05b9adbe7b-kube-api-access-z9ns5\") pod \"coredns-674b8bbfcf-mls9r\" (UID: \"95d48e29-4e87-4e1d-8bae-ce05b9adbe7b\") " pod="kube-system/coredns-674b8bbfcf-mls9r" Jun 20 18:31:10.226518 kubelet[3230]: I0620 18:31:10.226483 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/217fd2f8-d4a4-4170-a5b1-c8aa8ddc4908-config-volume\") pod \"coredns-674b8bbfcf-hq56h\" (UID: \"217fd2f8-d4a4-4170-a5b1-c8aa8ddc4908\") " pod="kube-system/coredns-674b8bbfcf-hq56h" Jun 20 18:31:10.495464 containerd[1963]: time="2025-06-20T18:31:10.495245010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hq56h,Uid:217fd2f8-d4a4-4170-a5b1-c8aa8ddc4908,Namespace:kube-system,Attempt:0,}" Jun 20 18:31:10.517606 containerd[1963]: time="2025-06-20T18:31:10.517534686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mls9r,Uid:95d48e29-4e87-4e1d-8bae-ce05b9adbe7b,Namespace:kube-system,Attempt:0,}" Jun 20 18:31:12.806308 systemd-networkd[1860]: cilium_host: Link UP Jun 20 18:31:12.806824 systemd-networkd[1860]: cilium_net: Link UP Jun 20 18:31:12.808592 (udev-worker)[4217]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:31:12.809871 (udev-worker)[4215]: Network interface NamePolicy= disabled on kernel command line. 
Jun 20 18:31:12.810075 systemd-networkd[1860]: cilium_net: Gained carrier Jun 20 18:31:12.810486 systemd-networkd[1860]: cilium_host: Gained carrier Jun 20 18:31:12.810860 systemd-networkd[1860]: cilium_net: Gained IPv6LL Jun 20 18:31:12.811271 systemd-networkd[1860]: cilium_host: Gained IPv6LL Jun 20 18:31:13.006532 systemd-networkd[1860]: cilium_vxlan: Link UP Jun 20 18:31:13.006551 systemd-networkd[1860]: cilium_vxlan: Gained carrier Jun 20 18:31:13.545866 kernel: NET: Registered PF_ALG protocol family Jun 20 18:31:14.450914 systemd-networkd[1860]: cilium_vxlan: Gained IPv6LL Jun 20 18:31:14.926476 systemd-networkd[1860]: lxc_health: Link UP Jun 20 18:31:14.937611 (udev-worker)[4256]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:31:14.938914 systemd-networkd[1860]: lxc_health: Gained carrier Jun 20 18:31:15.664690 kernel: eth0: renamed from tmp31607 Jun 20 18:31:15.684615 (udev-worker)[4258]: Network interface NamePolicy= disabled on kernel command line. Jun 20 18:31:15.688927 kernel: eth0: renamed from tmpc0d87 Jun 20 18:31:15.696810 systemd-networkd[1860]: lxcf29f38a2baf5: Link UP Jun 20 18:31:15.704811 systemd-networkd[1860]: lxc2bf1606acbe8: Link UP Jun 20 18:31:15.706141 systemd-networkd[1860]: lxcf29f38a2baf5: Gained carrier Jun 20 18:31:15.713813 systemd-networkd[1860]: lxc2bf1606acbe8: Gained carrier Jun 20 18:31:16.564156 kubelet[3230]: I0620 18:31:16.564039 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qp887" podStartSLOduration=13.017842618 podStartE2EDuration="25.56401878s" podCreationTimestamp="2025-06-20 18:30:51 +0000 UTC" firstStartedPulling="2025-06-20 18:30:52.717969326 +0000 UTC m=+5.480968253" lastFinishedPulling="2025-06-20 18:31:05.264145488 +0000 UTC m=+18.027144415" observedRunningTime="2025-06-20 18:31:10.789154147 +0000 UTC m=+23.552153098" watchObservedRunningTime="2025-06-20 18:31:16.56401878 +0000 UTC m=+29.327017707" Jun 20 18:31:16.882886 systemd-networkd[1860]: lxc_health: Gained IPv6LL Jun 20 18:31:17.074042 systemd-networkd[1860]: lxcf29f38a2baf5: Gained IPv6LL Jun 20 18:31:17.458112 systemd-networkd[1860]: lxc2bf1606acbe8: Gained IPv6LL Jun 20 18:31:20.035768 ntpd[1922]: Listen normally on 8 cilium_host 192.168.0.236:123 Jun 20 18:31:20.035927 ntpd[1922]: Listen normally on 9 cilium_net [fe80::9445:a5ff:fe35:297a%4]:123 Jun 20 18:31:20.036404 ntpd[1922]: 20 Jun 18:31:20 ntpd[1922]: Listen normally on 8 cilium_host 192.168.0.236:123 Jun 20 18:31:20.036404 ntpd[1922]: 20 Jun 18:31:20 ntpd[1922]: Listen normally on 9 cilium_net [fe80::9445:a5ff:fe35:297a%4]:123 Jun 20 18:31:20.036404 ntpd[1922]: 20 Jun 18:31:20 ntpd[1922]: Listen normally on 10 cilium_host [fe80::78c1:fbff:fea0:e4bc%5]:123 Jun 20 18:31:20.036404 ntpd[1922]: 20 Jun 18:31:20 ntpd[1922]: Listen normally on 11 cilium_vxlan [fe80::a89e:beff:fe5b:2f92%6]:123 Jun 20 18:31:20.036404 ntpd[1922]: 20 Jun 18:31:20 ntpd[1922]: Listen normally on 12 lxc_health [fe80::a000:57ff:fe6c:b51f%8]:123 Jun 20 18:31:20.036404 ntpd[1922]: 20 Jun 18:31:20 ntpd[1922]: Listen normally on 13 lxc2bf1606acbe8 [fe80::ac13:6aff:fed0:c4ed%10]:123 Jun 20 18:31:20.036404 ntpd[1922]: 20 Jun 18:31:20 ntpd[1922]: Listen normally on 14 lxcf29f38a2baf5 [fe80::94a3:2bff:fe9f:ed1c%12]:123 Jun 20 18:31:20.036019 ntpd[1922]: Listen normally on 10 cilium_host [fe80::78c1:fbff:fea0:e4bc%5]:123 Jun 20 18:31:20.036092 ntpd[1922]: Listen normally on 11 cilium_vxlan [fe80::a89e:beff:fe5b:2f92%6]:123 Jun 20 18:31:20.036163 ntpd[1922]: Listen normally on 12 
lxc_health [fe80::a000:57ff:fe6c:b51f%8]:123 Jun 20 18:31:20.036276 ntpd[1922]: Listen normally on 13 lxc2bf1606acbe8 [fe80::ac13:6aff:fed0:c4ed%10]:123 Jun 20 18:31:20.036359 ntpd[1922]: Listen normally on 14 lxcf29f38a2baf5 [fe80::94a3:2bff:fe9f:ed1c%12]:123 Jun 20 18:31:24.755114 containerd[1963]: time="2025-06-20T18:31:24.753923289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:31:24.755114 containerd[1963]: time="2025-06-20T18:31:24.754021305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:31:24.755114 containerd[1963]: time="2025-06-20T18:31:24.754047069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:31:24.756393 containerd[1963]: time="2025-06-20T18:31:24.754205793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:31:24.827168 systemd[1]: run-containerd-runc-k8s.io-316075d7c8cf9a7a7a26a59e2944f418c68fcf55521501ac1cf7774b23260c32-runc.t6nc39.mount: Deactivated successfully. Jun 20 18:31:24.845986 systemd[1]: Started cri-containerd-316075d7c8cf9a7a7a26a59e2944f418c68fcf55521501ac1cf7774b23260c32.scope - libcontainer container 316075d7c8cf9a7a7a26a59e2944f418c68fcf55521501ac1cf7774b23260c32. Jun 20 18:31:24.876784 containerd[1963]: time="2025-06-20T18:31:24.876326853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:31:24.879778 containerd[1963]: time="2025-06-20T18:31:24.877053261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:31:24.880306 containerd[1963]: time="2025-06-20T18:31:24.879876393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:31:24.880306 containerd[1963]: time="2025-06-20T18:31:24.880142781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:31:24.950500 systemd[1]: Started cri-containerd-c0d8783f3b52cf5308a30f19ab3dbb4b7ae06cf5d0406eb62f6fd3822dcc6b6b.scope - libcontainer container c0d8783f3b52cf5308a30f19ab3dbb4b7ae06cf5d0406eb62f6fd3822dcc6b6b. 
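For the cilium-qp887 pod-startup entry a few lines above (logged at 18:31:16), the gap between podStartSLOduration=13.017842618 and podStartE2EDuration=25.56401878s matches the image-pull window between firstStartedPulling and lastFinishedPulling. A small check of that reading — an assumption about how the tracker derives the SLO figure, but the arithmetic lines up for this pod; timestamps are truncated to microseconds:

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # Parse "2025-06-20 18:31:05.264145488 +0000 UTC"-style stamps, keeping at most 6 fractional digits.
    date, clock = s.split()[:2]
    if "." in clock:
        clock = clock[: clock.index(".") + 7]
    return datetime.fromisoformat(f"{date} {clock}").replace(tzinfo=timezone.utc)

# Values copied from the cilium-qp887 pod_startup_latency_tracker entry above.
created    = ts("2025-06-20 18:30:51")
first_pull = ts("2025-06-20 18:30:52.717969326")
last_pull  = ts("2025-06-20 18:31:05.264145488")
observed   = ts("2025-06-20 18:31:16.56401878")

e2e = (observed - created).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"E2E ~= {e2e:.6f}s, SLO ~= {slo:.6f}s")  # within rounding of the logged 25.56401878s and 13.017842618s
```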
Jun 20 18:31:24.993783 containerd[1963]: time="2025-06-20T18:31:24.993692134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mls9r,Uid:95d48e29-4e87-4e1d-8bae-ce05b9adbe7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"316075d7c8cf9a7a7a26a59e2944f418c68fcf55521501ac1cf7774b23260c32\"" Jun 20 18:31:25.012782 containerd[1963]: time="2025-06-20T18:31:25.010927902Z" level=info msg="CreateContainer within sandbox \"316075d7c8cf9a7a7a26a59e2944f418c68fcf55521501ac1cf7774b23260c32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:31:25.050381 containerd[1963]: time="2025-06-20T18:31:25.050290182Z" level=info msg="CreateContainer within sandbox \"316075d7c8cf9a7a7a26a59e2944f418c68fcf55521501ac1cf7774b23260c32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6db74ec01bffee965192becad4eac3c097d89d6f7e430703c83ee3e06311ba1\"" Jun 20 18:31:25.055224 containerd[1963]: time="2025-06-20T18:31:25.054485238Z" level=info msg="StartContainer for \"a6db74ec01bffee965192becad4eac3c097d89d6f7e430703c83ee3e06311ba1\"" Jun 20 18:31:25.135001 systemd[1]: Started cri-containerd-a6db74ec01bffee965192becad4eac3c097d89d6f7e430703c83ee3e06311ba1.scope - libcontainer container a6db74ec01bffee965192becad4eac3c097d89d6f7e430703c83ee3e06311ba1. Jun 20 18:31:25.146090 containerd[1963]: time="2025-06-20T18:31:25.145983847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hq56h,Uid:217fd2f8-d4a4-4170-a5b1-c8aa8ddc4908,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0d8783f3b52cf5308a30f19ab3dbb4b7ae06cf5d0406eb62f6fd3822dcc6b6b\"" Jun 20 18:31:25.164341 containerd[1963]: time="2025-06-20T18:31:25.164144467Z" level=info msg="CreateContainer within sandbox \"c0d8783f3b52cf5308a30f19ab3dbb4b7ae06cf5d0406eb62f6fd3822dcc6b6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:31:25.195745 containerd[1963]: time="2025-06-20T18:31:25.195670855Z" level=info msg="CreateContainer within sandbox \"c0d8783f3b52cf5308a30f19ab3dbb4b7ae06cf5d0406eb62f6fd3822dcc6b6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0a6fd636c22a2bce9c5c50c7fa7ab1663739dc1648c5a20b05e9c1dd98bcf5f\"" Jun 20 18:31:25.197292 containerd[1963]: time="2025-06-20T18:31:25.197230711Z" level=info msg="StartContainer for \"c0a6fd636c22a2bce9c5c50c7fa7ab1663739dc1648c5a20b05e9c1dd98bcf5f\"" Jun 20 18:31:25.266173 containerd[1963]: time="2025-06-20T18:31:25.265900663Z" level=info msg="StartContainer for \"a6db74ec01bffee965192becad4eac3c097d89d6f7e430703c83ee3e06311ba1\" returns successfully" Jun 20 18:31:25.272781 systemd[1]: Started cri-containerd-c0a6fd636c22a2bce9c5c50c7fa7ab1663739dc1648c5a20b05e9c1dd98bcf5f.scope - libcontainer container c0a6fd636c22a2bce9c5c50c7fa7ab1663739dc1648c5a20b05e9c1dd98bcf5f. 
Jun 20 18:31:25.369195 containerd[1963]: time="2025-06-20T18:31:25.369125936Z" level=info msg="StartContainer for \"c0a6fd636c22a2bce9c5c50c7fa7ab1663739dc1648c5a20b05e9c1dd98bcf5f\" returns successfully" Jun 20 18:31:25.831697 kubelet[3230]: I0620 18:31:25.828683 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mls9r" podStartSLOduration=34.82856689 podStartE2EDuration="34.82856689s" podCreationTimestamp="2025-06-20 18:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:31:25.82625809 +0000 UTC m=+38.589257041" watchObservedRunningTime="2025-06-20 18:31:25.82856689 +0000 UTC m=+38.591565853" Jun 20 18:31:25.918695 kubelet[3230]: I0620 18:31:25.918202 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hq56h" podStartSLOduration=34.918172258 podStartE2EDuration="34.918172258s" podCreationTimestamp="2025-06-20 18:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:31:25.867959242 +0000 UTC m=+38.630958241" watchObservedRunningTime="2025-06-20 18:31:25.918172258 +0000 UTC m=+38.681171173" Jun 20 18:31:34.435181 systemd[1]: Started sshd@7-172.31.22.87:22-147.75.109.163:48126.service - OpenSSH per-connection server daemon (147.75.109.163:48126). Jun 20 18:31:34.627937 sshd[4797]: Accepted publickey for core from 147.75.109.163 port 48126 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:31:34.630753 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:31:34.639212 systemd-logind[1929]: New session 8 of user core. Jun 20 18:31:34.648954 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:31:34.926582 sshd[4799]: Connection closed by 147.75.109.163 port 48126 Jun 20 18:31:34.927574 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Jun 20 18:31:34.935311 systemd[1]: sshd@7-172.31.22.87:22-147.75.109.163:48126.service: Deactivated successfully. Jun 20 18:31:34.939933 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:31:34.941527 systemd-logind[1929]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:31:34.943598 systemd-logind[1929]: Removed session 8. Jun 20 18:31:39.971195 systemd[1]: Started sshd@8-172.31.22.87:22-147.75.109.163:54964.service - OpenSSH per-connection server daemon (147.75.109.163:54964). Jun 20 18:31:40.171830 sshd[4812]: Accepted publickey for core from 147.75.109.163 port 54964 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:31:40.175826 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:31:40.186765 systemd-logind[1929]: New session 9 of user core. Jun 20 18:31:40.195964 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 18:31:40.472671 sshd[4814]: Connection closed by 147.75.109.163 port 54964 Jun 20 18:31:40.471893 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Jun 20 18:31:40.479984 systemd-logind[1929]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:31:40.481214 systemd[1]: sshd@8-172.31.22.87:22-147.75.109.163:54964.service: Deactivated successfully. Jun 20 18:31:40.488065 systemd[1]: session-9.scope: Deactivated successfully. 
Jun 20 18:31:40.490832 systemd-logind[1929]: Removed session 9. Jun 20 18:31:45.523153 systemd[1]: Started sshd@9-172.31.22.87:22-147.75.109.163:54976.service - OpenSSH per-connection server daemon (147.75.109.163:54976). Jun 20 18:31:45.715016 sshd[4827]: Accepted publickey for core from 147.75.109.163 port 54976 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:31:45.717941 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:31:45.726672 systemd-logind[1929]: New session 10 of user core. Jun 20 18:31:45.733949 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:31:45.983709 sshd[4829]: Connection closed by 147.75.109.163 port 54976 Jun 20 18:31:45.984557 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Jun 20 18:31:45.991424 systemd[1]: sshd@9-172.31.22.87:22-147.75.109.163:54976.service: Deactivated successfully. Jun 20 18:31:45.995809 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:31:45.998944 systemd-logind[1929]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:31:46.001553 systemd-logind[1929]: Removed session 10. Jun 20 18:31:51.030182 systemd[1]: Started sshd@10-172.31.22.87:22-147.75.109.163:52434.service - OpenSSH per-connection server daemon (147.75.109.163:52434). Jun 20 18:31:51.218605 sshd[4844]: Accepted publickey for core from 147.75.109.163 port 52434 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:31:51.221396 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:31:51.232874 systemd-logind[1929]: New session 11 of user core. Jun 20 18:31:51.240923 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:31:51.497558 sshd[4846]: Connection closed by 147.75.109.163 port 52434 Jun 20 18:31:51.498493 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Jun 20 18:31:51.507508 systemd[1]: sshd@10-172.31.22.87:22-147.75.109.163:52434.service: Deactivated successfully. Jun 20 18:31:51.512957 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:31:51.518270 systemd-logind[1929]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:31:51.544175 systemd[1]: Started sshd@11-172.31.22.87:22-147.75.109.163:52450.service - OpenSSH per-connection server daemon (147.75.109.163:52450). Jun 20 18:31:51.546915 systemd-logind[1929]: Removed session 11. Jun 20 18:31:51.744888 sshd[4857]: Accepted publickey for core from 147.75.109.163 port 52450 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:31:51.746824 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:31:51.756588 systemd-logind[1929]: New session 12 of user core. Jun 20 18:31:51.762936 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:31:52.090236 sshd[4860]: Connection closed by 147.75.109.163 port 52450 Jun 20 18:31:52.092195 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Jun 20 18:31:52.102357 systemd[1]: sshd@11-172.31.22.87:22-147.75.109.163:52450.service: Deactivated successfully. Jun 20 18:31:52.110121 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:31:52.116539 systemd-logind[1929]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:31:52.142346 systemd[1]: Started sshd@12-172.31.22.87:22-147.75.109.163:52464.service - OpenSSH per-connection server daemon (147.75.109.163:52464). 
Jun 20 18:31:52.146707 systemd-logind[1929]: Removed session 12. Jun 20 18:31:52.330923 sshd[4869]: Accepted publickey for core from 147.75.109.163 port 52464 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:31:52.334089 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:31:52.342152 systemd-logind[1929]: New session 13 of user core. Jun 20 18:31:52.351977 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:31:52.600750 sshd[4872]: Connection closed by 147.75.109.163 port 52464 Jun 20 18:31:52.599604 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Jun 20 18:31:52.606154 systemd[1]: sshd@12-172.31.22.87:22-147.75.109.163:52464.service: Deactivated successfully. Jun 20 18:31:52.611744 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:31:52.617439 systemd-logind[1929]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:31:52.621219 systemd-logind[1929]: Removed session 13. Jun 20 18:31:57.642157 systemd[1]: Started sshd@13-172.31.22.87:22-147.75.109.163:45008.service - OpenSSH per-connection server daemon (147.75.109.163:45008). Jun 20 18:31:57.827515 sshd[4887]: Accepted publickey for core from 147.75.109.163 port 45008 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:31:57.830686 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:31:57.842018 systemd-logind[1929]: New session 14 of user core. Jun 20 18:31:57.849064 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:31:58.107800 sshd[4889]: Connection closed by 147.75.109.163 port 45008 Jun 20 18:31:58.109807 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Jun 20 18:31:58.124469 systemd[1]: sshd@13-172.31.22.87:22-147.75.109.163:45008.service: Deactivated successfully. Jun 20 18:31:58.131372 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:31:58.137083 systemd-logind[1929]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:31:58.140190 systemd-logind[1929]: Removed session 14. Jun 20 18:32:03.152149 systemd[1]: Started sshd@14-172.31.22.87:22-147.75.109.163:45018.service - OpenSSH per-connection server daemon (147.75.109.163:45018). Jun 20 18:32:03.349368 sshd[4901]: Accepted publickey for core from 147.75.109.163 port 45018 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:03.352141 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:03.364361 systemd-logind[1929]: New session 15 of user core. Jun 20 18:32:03.373059 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 18:32:03.629731 sshd[4903]: Connection closed by 147.75.109.163 port 45018 Jun 20 18:32:03.630340 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:03.636925 systemd[1]: sshd@14-172.31.22.87:22-147.75.109.163:45018.service: Deactivated successfully. Jun 20 18:32:03.641431 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:32:03.644136 systemd-logind[1929]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:32:03.646582 systemd-logind[1929]: Removed session 15. Jun 20 18:32:08.675082 systemd[1]: Started sshd@15-172.31.22.87:22-147.75.109.163:52664.service - OpenSSH per-connection server daemon (147.75.109.163:52664). 
Jun 20 18:32:08.858340 sshd[4915]: Accepted publickey for core from 147.75.109.163 port 52664 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:08.860918 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:08.868926 systemd-logind[1929]: New session 16 of user core. Jun 20 18:32:08.884944 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:32:09.145570 sshd[4917]: Connection closed by 147.75.109.163 port 52664 Jun 20 18:32:09.146460 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:09.151592 systemd-logind[1929]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:32:09.153033 systemd[1]: sshd@15-172.31.22.87:22-147.75.109.163:52664.service: Deactivated successfully. Jun 20 18:32:09.156037 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:32:09.160769 systemd-logind[1929]: Removed session 16. Jun 20 18:32:14.188192 systemd[1]: Started sshd@16-172.31.22.87:22-147.75.109.163:52680.service - OpenSSH per-connection server daemon (147.75.109.163:52680). Jun 20 18:32:14.384508 sshd[4931]: Accepted publickey for core from 147.75.109.163 port 52680 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:14.388511 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:14.399435 systemd-logind[1929]: New session 17 of user core. Jun 20 18:32:14.405033 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:32:14.683809 sshd[4933]: Connection closed by 147.75.109.163 port 52680 Jun 20 18:32:14.685094 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:14.693876 systemd[1]: sshd@16-172.31.22.87:22-147.75.109.163:52680.service: Deactivated successfully. Jun 20 18:32:14.698971 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:32:14.702060 systemd-logind[1929]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:32:14.719424 systemd-logind[1929]: Removed session 17. Jun 20 18:32:14.726220 systemd[1]: Started sshd@17-172.31.22.87:22-147.75.109.163:52690.service - OpenSSH per-connection server daemon (147.75.109.163:52690). Jun 20 18:32:14.923877 sshd[4943]: Accepted publickey for core from 147.75.109.163 port 52690 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:14.928678 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:14.939760 systemd-logind[1929]: New session 18 of user core. Jun 20 18:32:14.945047 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:32:15.238316 sshd[4946]: Connection closed by 147.75.109.163 port 52690 Jun 20 18:32:15.239954 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:15.247066 systemd[1]: sshd@17-172.31.22.87:22-147.75.109.163:52690.service: Deactivated successfully. Jun 20 18:32:15.251959 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:32:15.254344 systemd-logind[1929]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:32:15.257132 systemd-logind[1929]: Removed session 18. Jun 20 18:32:15.281132 systemd[1]: Started sshd@18-172.31.22.87:22-147.75.109.163:52702.service - OpenSSH per-connection server daemon (147.75.109.163:52702). 
Jun 20 18:32:15.464296 sshd[4956]: Accepted publickey for core from 147.75.109.163 port 52702 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:15.467547 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:15.478418 systemd-logind[1929]: New session 19 of user core. Jun 20 18:32:15.484002 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:32:16.882411 sshd[4958]: Connection closed by 147.75.109.163 port 52702 Jun 20 18:32:16.882967 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:16.892361 systemd[1]: sshd@18-172.31.22.87:22-147.75.109.163:52702.service: Deactivated successfully. Jun 20 18:32:16.902178 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:32:16.907023 systemd-logind[1929]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:32:16.933823 systemd[1]: Started sshd@19-172.31.22.87:22-147.75.109.163:51662.service - OpenSSH per-connection server daemon (147.75.109.163:51662). Jun 20 18:32:16.938478 systemd-logind[1929]: Removed session 19. Jun 20 18:32:17.146231 sshd[4973]: Accepted publickey for core from 147.75.109.163 port 51662 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:17.147997 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:17.157419 systemd-logind[1929]: New session 20 of user core. Jun 20 18:32:17.164913 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:32:17.664648 sshd[4977]: Connection closed by 147.75.109.163 port 51662 Jun 20 18:32:17.666322 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:17.676166 systemd[1]: sshd@19-172.31.22.87:22-147.75.109.163:51662.service: Deactivated successfully. Jun 20 18:32:17.683441 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:32:17.685196 systemd-logind[1929]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:32:17.707292 systemd-logind[1929]: Removed session 20. Jun 20 18:32:17.715209 systemd[1]: Started sshd@20-172.31.22.87:22-147.75.109.163:51664.service - OpenSSH per-connection server daemon (147.75.109.163:51664). Jun 20 18:32:17.905774 sshd[4986]: Accepted publickey for core from 147.75.109.163 port 51664 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:17.908510 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:17.918045 systemd-logind[1929]: New session 21 of user core. Jun 20 18:32:17.926946 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 18:32:18.167728 sshd[4989]: Connection closed by 147.75.109.163 port 51664 Jun 20 18:32:18.168549 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:18.175237 systemd[1]: sshd@20-172.31.22.87:22-147.75.109.163:51664.service: Deactivated successfully. Jun 20 18:32:18.182818 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:32:18.187371 systemd-logind[1929]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:32:18.190576 systemd-logind[1929]: Removed session 21. 
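The logind entries above repeat one pattern per SSH login: a per-connection `sshd@...service` starts, sshd accepts the `core` user's public key, pam_unix opens the session, logind allocates `session-N` and its matching `session-N.scope`, and on disconnect the scope, the service, and the logind session are torn down in order. A minimal sketch for pairing the "New session N" / "Removed session N" lines into per-session durations; it assumes journald text output in exactly the prefix format used here (year-less timestamps, a `systemd-logind[PID]:` tag), and the helper name is illustrative.

```python
import re
from datetime import datetime

# Matches the systemd-logind entries used throughout this log, e.g.
# "Jun 20 18:31:45.726672 systemd-logind[1929]: New session 10 of user core."
LOGIND = re.compile(
    r"^(?P<ts>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"systemd-logind\[\d+\]: (?P<event>New|Removed) session (?P<sid>\d+)"
)

def session_durations(lines):
    """Pair 'New session N' with 'Removed session N' and return seconds per session ID."""
    opened, durations = {}, {}
    for line in lines:
        m = LOGIND.match(line)
        if not m:
            continue
        # The journald prefix carries no year; assume all entries fall in the same year.
        ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
        if m["event"] == "New":
            opened[m["sid"]] = ts
        elif m["sid"] in opened:
            durations[m["sid"]] = (ts - opened.pop(m["sid"])).total_seconds()
    return durations

if __name__ == "__main__":
    sample = [
        "Jun 20 18:31:45.726672 systemd-logind[1929]: New session 10 of user core.",
        "Jun 20 18:31:46.001553 systemd-logind[1929]: Removed session 10.",
    ]
    print(session_durations(sample))  # {'10': 0.274881}
```

Session 10 above, for example, is opened at 18:31:45.73 and removed at 18:31:46.00, roughly a quarter of a second later.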
Jun 20 18:32:21.854513 update_engine[1931]: I20250620 18:32:21.854418 1931 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 18:32:21.854513 update_engine[1931]: I20250620 18:32:21.854506 1931 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 18:32:21.855169 update_engine[1931]: I20250620 18:32:21.854820 1931 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 18:32:21.855767 update_engine[1931]: I20250620 18:32:21.855710 1931 omaha_request_params.cc:62] Current group set to stable Jun 20 18:32:21.856392 update_engine[1931]: I20250620 18:32:21.855888 1931 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 18:32:21.856392 update_engine[1931]: I20250620 18:32:21.855914 1931 update_attempter.cc:643] Scheduling an action processor start. Jun 20 18:32:21.856392 update_engine[1931]: I20250620 18:32:21.855948 1931 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:32:21.856392 update_engine[1931]: I20250620 18:32:21.856014 1931 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 18:32:21.856392 update_engine[1931]: I20250620 18:32:21.856122 1931 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:32:21.856392 update_engine[1931]: I20250620 18:32:21.856142 1931 omaha_request_action.cc:272] Request: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: Jun 20 18:32:21.856392 update_engine[1931]: I20250620 18:32:21.856159 1931 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:32:21.858188 locksmithd[1978]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 18:32:21.858825 update_engine[1931]: I20250620 18:32:21.858754 1931 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:32:21.859394 update_engine[1931]: I20250620 18:32:21.859326 1931 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:32:21.896775 update_engine[1931]: E20250620 18:32:21.896692 1931 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:32:21.896926 update_engine[1931]: I20250620 18:32:21.896834 1931 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 18:32:23.212147 systemd[1]: Started sshd@21-172.31.22.87:22-147.75.109.163:51674.service - OpenSSH per-connection server daemon (147.75.109.163:51674). Jun 20 18:32:23.394355 sshd[5001]: Accepted publickey for core from 147.75.109.163 port 51674 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:23.397489 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:23.406478 systemd-logind[1929]: New session 22 of user core. Jun 20 18:32:23.414905 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:32:23.665075 sshd[5005]: Connection closed by 147.75.109.163 port 51674 Jun 20 18:32:23.667920 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:23.673896 systemd[1]: sshd@21-172.31.22.87:22-147.75.109.163:51674.service: Deactivated successfully. 
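The update_engine block above is a routine Omaha update check on a host whose update server is configured as the literal string "disabled": the group resolves to "stable", the request is posted "to disabled", curl cannot resolve that host, and the fetcher schedules retry 1. (The bare `update_engine[1931]:` continuation lines are where the XML request body is normally printed.) A minimal sketch for tallying such failed checks, assuming only the `libcurl_http_fetcher` wording shown here; the function name is illustrative.

```python
import re
from collections import Counter

# update_engine's fetch failures look like:
#   libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
#   libcurl_http_fetcher.cc:283] No HTTP response, retry 1
RETRY = re.compile(r"libcurl_http_fetcher\.cc:\d+\] No HTTP response, retry (\d+)")
ERROR = re.compile(r"Unable to get http response code: (.+)$")

def summarize_update_checks(lines):
    """Count failed update_engine fetches and the curl errors behind them."""
    retries, errors = [0], Counter()
    for line in lines:
        if (m := RETRY.search(line)):
            retries.append(int(m.group(1)))
        if (m := ERROR.search(line)):
            errors[m.group(1).strip()] += 1
    return {"max_retry_seen": max(retries), "errors": dict(errors)}
```

On this stretch of the log it reports a single "Could not resolve host: disabled" error with retry 1; the same exchange repeats about ten seconds later with retry 2.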
Jun 20 18:32:23.679018 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:32:23.680582 systemd-logind[1929]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:32:23.682876 systemd-logind[1929]: Removed session 22. Jun 20 18:32:28.710137 systemd[1]: Started sshd@22-172.31.22.87:22-147.75.109.163:35444.service - OpenSSH per-connection server daemon (147.75.109.163:35444). Jun 20 18:32:28.890232 sshd[5020]: Accepted publickey for core from 147.75.109.163 port 35444 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:28.892781 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:28.902004 systemd-logind[1929]: New session 23 of user core. Jun 20 18:32:28.910245 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:32:29.159763 sshd[5022]: Connection closed by 147.75.109.163 port 35444 Jun 20 18:32:29.160737 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:29.168719 systemd[1]: sshd@22-172.31.22.87:22-147.75.109.163:35444.service: Deactivated successfully. Jun 20 18:32:29.174241 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:32:29.177010 systemd-logind[1929]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:32:29.178866 systemd-logind[1929]: Removed session 23. Jun 20 18:32:31.853145 update_engine[1931]: I20250620 18:32:31.853044 1931 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:32:31.854186 update_engine[1931]: I20250620 18:32:31.853415 1931 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:32:31.854186 update_engine[1931]: I20250620 18:32:31.853785 1931 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:32:31.855499 update_engine[1931]: E20250620 18:32:31.855180 1931 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:32:31.855499 update_engine[1931]: I20250620 18:32:31.855288 1931 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 18:32:34.203197 systemd[1]: Started sshd@23-172.31.22.87:22-147.75.109.163:35454.service - OpenSSH per-connection server daemon (147.75.109.163:35454). Jun 20 18:32:34.393357 sshd[5034]: Accepted publickey for core from 147.75.109.163 port 35454 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:34.396462 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:34.405814 systemd-logind[1929]: New session 24 of user core. Jun 20 18:32:34.414932 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:32:34.663750 sshd[5036]: Connection closed by 147.75.109.163 port 35454 Jun 20 18:32:34.664617 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:34.671133 systemd[1]: sshd@23-172.31.22.87:22-147.75.109.163:35454.service: Deactivated successfully. Jun 20 18:32:34.676516 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:32:34.678266 systemd-logind[1929]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:32:34.680696 systemd-logind[1929]: Removed session 24. Jun 20 18:32:34.708136 systemd[1]: Started sshd@24-172.31.22.87:22-147.75.109.163:35460.service - OpenSSH per-connection server daemon (147.75.109.163:35460). 
Jun 20 18:32:34.890693 sshd[5048]: Accepted publickey for core from 147.75.109.163 port 35460 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:34.893124 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:34.903687 systemd-logind[1929]: New session 25 of user core. Jun 20 18:32:34.909980 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:32:36.887704 containerd[1963]: time="2025-06-20T18:32:36.887495959Z" level=info msg="StopContainer for \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\" with timeout 30 (s)" Jun 20 18:32:36.890394 containerd[1963]: time="2025-06-20T18:32:36.889765207Z" level=info msg="Stop container \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\" with signal terminated" Jun 20 18:32:36.944792 containerd[1963]: time="2025-06-20T18:32:36.944719147Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:32:36.946795 systemd[1]: cri-containerd-66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2.scope: Deactivated successfully. Jun 20 18:32:36.966489 containerd[1963]: time="2025-06-20T18:32:36.966422779Z" level=info msg="StopContainer for \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\" with timeout 2 (s)" Jun 20 18:32:36.967530 containerd[1963]: time="2025-06-20T18:32:36.967334947Z" level=info msg="Stop container \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\" with signal terminated" Jun 20 18:32:37.008305 systemd-networkd[1860]: lxc_health: Link DOWN Jun 20 18:32:37.009110 systemd-networkd[1860]: lxc_health: Lost carrier Jun 20 18:32:37.011885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2-rootfs.mount: Deactivated successfully. Jun 20 18:32:37.029263 containerd[1963]: time="2025-06-20T18:32:37.028572808Z" level=info msg="shim disconnected" id=66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2 namespace=k8s.io Jun 20 18:32:37.029263 containerd[1963]: time="2025-06-20T18:32:37.028830736Z" level=warning msg="cleaning up after shim disconnected" id=66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2 namespace=k8s.io Jun 20 18:32:37.029263 containerd[1963]: time="2025-06-20T18:32:37.028854952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:32:37.033097 systemd[1]: cri-containerd-268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba.scope: Deactivated successfully. Jun 20 18:32:37.036526 systemd[1]: cri-containerd-268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba.scope: Consumed 15.313s CPU time, 124.7M memory peak, 136K read from disk, 12.9M written to disk. 
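When a container's scope is deactivated, systemd prints a resource-accounting summary such as "Consumed 15.313s CPU time, 124.7M memory peak, 136K read from disk, 12.9M written to disk." A minimal sketch that turns that summary into numbers, assuming the exact phrasing above; treating K/M/G/T as binary multiples is an assumption about systemd's size formatting, not something stated in this log.

```python
import re

# systemd's accounting summary at scope/service deactivation, e.g.
# "... .scope: Consumed 15.313s CPU time, 124.7M memory peak, 136K read from disk, 12.9M written to disk."
ACCT = re.compile(
    r"Consumed (?P<cpu>[\d.]+)s CPU time"
    r"(?:, (?P<mem>[\d.]+[KMGT]?) memory peak)?"
    r"(?:, (?P<read>[\d.]+[KMGT]?) read from disk)?"
    r"(?:, (?P<written>[\d.]+[KMGT]?) written to disk)?"
)
UNITS = {"": 1, "K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def to_bytes(field):
    """Convert a size like '124.7M' to bytes; missing fields stay None."""
    if field is None:
        return None
    value, unit = re.match(r"([\d.]+)([KMGT]?)", field).groups()
    return int(float(value) * UNITS[unit])

def parse_accounting(line):
    m = ACCT.search(line)
    if not m:
        return None
    return {
        "cpu_seconds": float(m["cpu"]),
        "memory_peak_bytes": to_bytes(m["mem"]),
        "disk_read_bytes": to_bytes(m["read"]),
        "disk_written_bytes": to_bytes(m["written"]),
    }

if __name__ == "__main__":
    print(parse_accounting("Consumed 15.313s CPU time, 124.7M memory peak, "
                           "136K read from disk, 12.9M written to disk."))
```

The optional groups also cover the shorter summaries later in the log, such as session-25.scope's "Consumed 1.203s CPU time, 23.3M memory peak.", where the disk fields are absent.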
Jun 20 18:32:37.075686 containerd[1963]: time="2025-06-20T18:32:37.074281336Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:32:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:32:37.081131 containerd[1963]: time="2025-06-20T18:32:37.080985004Z" level=info msg="StopContainer for \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\" returns successfully" Jun 20 18:32:37.081977 containerd[1963]: time="2025-06-20T18:32:37.081925348Z" level=info msg="StopPodSandbox for \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\"" Jun 20 18:32:37.082172 containerd[1963]: time="2025-06-20T18:32:37.082002388Z" level=info msg="Container to stop \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:32:37.088052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c-shm.mount: Deactivated successfully. Jun 20 18:32:37.104863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba-rootfs.mount: Deactivated successfully. Jun 20 18:32:37.109436 systemd[1]: cri-containerd-25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c.scope: Deactivated successfully. Jun 20 18:32:37.119742 containerd[1963]: time="2025-06-20T18:32:37.118507348Z" level=info msg="shim disconnected" id=268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba namespace=k8s.io Jun 20 18:32:37.119742 containerd[1963]: time="2025-06-20T18:32:37.118934476Z" level=warning msg="cleaning up after shim disconnected" id=268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba namespace=k8s.io Jun 20 18:32:37.119742 containerd[1963]: time="2025-06-20T18:32:37.118960084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:32:37.159914 containerd[1963]: time="2025-06-20T18:32:37.158096716Z" level=info msg="StopContainer for \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\" returns successfully" Jun 20 18:32:37.162064 containerd[1963]: time="2025-06-20T18:32:37.161918740Z" level=info msg="StopPodSandbox for \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\"" Jun 20 18:32:37.162064 containerd[1963]: time="2025-06-20T18:32:37.162020776Z" level=info msg="Container to stop \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:32:37.162295 containerd[1963]: time="2025-06-20T18:32:37.162068140Z" level=info msg="Container to stop \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:32:37.162295 containerd[1963]: time="2025-06-20T18:32:37.162092128Z" level=info msg="Container to stop \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:32:37.162295 containerd[1963]: time="2025-06-20T18:32:37.162118528Z" level=info msg="Container to stop \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:32:37.162295 containerd[1963]: time="2025-06-20T18:32:37.162151864Z" level=info msg="Container to 
stop \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:32:37.167810 containerd[1963]: time="2025-06-20T18:32:37.167730904Z" level=info msg="shim disconnected" id=25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c namespace=k8s.io Jun 20 18:32:37.168267 containerd[1963]: time="2025-06-20T18:32:37.168006808Z" level=warning msg="cleaning up after shim disconnected" id=25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c namespace=k8s.io Jun 20 18:32:37.168267 containerd[1963]: time="2025-06-20T18:32:37.168036232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:32:37.187675 systemd[1]: cri-containerd-c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9.scope: Deactivated successfully. Jun 20 18:32:37.208336 containerd[1963]: time="2025-06-20T18:32:37.208244561Z" level=info msg="TearDown network for sandbox \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" successfully" Jun 20 18:32:37.208336 containerd[1963]: time="2025-06-20T18:32:37.208296749Z" level=info msg="StopPodSandbox for \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" returns successfully" Jun 20 18:32:37.254677 containerd[1963]: time="2025-06-20T18:32:37.254490605Z" level=info msg="shim disconnected" id=c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9 namespace=k8s.io Jun 20 18:32:37.254677 containerd[1963]: time="2025-06-20T18:32:37.254579357Z" level=warning msg="cleaning up after shim disconnected" id=c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9 namespace=k8s.io Jun 20 18:32:37.254677 containerd[1963]: time="2025-06-20T18:32:37.254598977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:32:37.278084 containerd[1963]: time="2025-06-20T18:32:37.277876001Z" level=info msg="TearDown network for sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" successfully" Jun 20 18:32:37.278084 containerd[1963]: time="2025-06-20T18:32:37.277948373Z" level=info msg="StopPodSandbox for \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" returns successfully" Jun 20 18:32:37.364965 kubelet[3230]: I0620 18:32:37.364893 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f4d569-fdd1-43a2-847b-e897730d763a-cilium-config-path\") pod \"31f4d569-fdd1-43a2-847b-e897730d763a\" (UID: \"31f4d569-fdd1-43a2-847b-e897730d763a\") " Jun 20 18:32:37.365544 kubelet[3230]: I0620 18:32:37.364981 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6ktm\" (UniqueName: \"kubernetes.io/projected/31f4d569-fdd1-43a2-847b-e897730d763a-kube-api-access-r6ktm\") pod \"31f4d569-fdd1-43a2-847b-e897730d763a\" (UID: \"31f4d569-fdd1-43a2-847b-e897730d763a\") " Jun 20 18:32:37.372234 kubelet[3230]: I0620 18:32:37.372151 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f4d569-fdd1-43a2-847b-e897730d763a-kube-api-access-r6ktm" (OuterVolumeSpecName: "kube-api-access-r6ktm") pod "31f4d569-fdd1-43a2-847b-e897730d763a" (UID: "31f4d569-fdd1-43a2-847b-e897730d763a"). InnerVolumeSpecName "kube-api-access-r6ktm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:32:37.372756 kubelet[3230]: I0620 18:32:37.372712 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f4d569-fdd1-43a2-847b-e897730d763a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31f4d569-fdd1-43a2-847b-e897730d763a" (UID: "31f4d569-fdd1-43a2-847b-e897730d763a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:32:37.466390 kubelet[3230]: I0620 18:32:37.466229 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-hubble-tls\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466390 kubelet[3230]: I0620 18:32:37.466295 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-run\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466390 kubelet[3230]: I0620 18:32:37.466336 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01009017-aa32-43ee-845e-a0712813ae67-clustermesh-secrets\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466390 kubelet[3230]: I0620 18:32:37.466373 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-hostproc\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466772 kubelet[3230]: I0620 18:32:37.466426 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-etc-cni-netd\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466772 kubelet[3230]: I0620 18:32:37.466467 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cni-path\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466772 kubelet[3230]: I0620 18:32:37.466505 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-net\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466772 kubelet[3230]: I0620 18:32:37.466538 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-bpf-maps\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466772 kubelet[3230]: I0620 18:32:37.466573 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-lib-modules\") pod 
\"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.466772 kubelet[3230]: I0620 18:32:37.466611 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-kernel\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.467089 kubelet[3230]: I0620 18:32:37.466684 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-xtables-lock\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.467089 kubelet[3230]: I0620 18:32:37.466720 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-cgroup\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.467089 kubelet[3230]: I0620 18:32:37.466760 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rgkl\" (UniqueName: \"kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-kube-api-access-7rgkl\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.467089 kubelet[3230]: I0620 18:32:37.466798 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01009017-aa32-43ee-845e-a0712813ae67-cilium-config-path\") pod \"01009017-aa32-43ee-845e-a0712813ae67\" (UID: \"01009017-aa32-43ee-845e-a0712813ae67\") " Jun 20 18:32:37.467089 kubelet[3230]: I0620 18:32:37.466872 3230 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31f4d569-fdd1-43a2-847b-e897730d763a-cilium-config-path\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.467089 kubelet[3230]: I0620 18:32:37.466896 3230 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6ktm\" (UniqueName: \"kubernetes.io/projected/31f4d569-fdd1-43a2-847b-e897730d763a-kube-api-access-r6ktm\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.468557 kubelet[3230]: I0620 18:32:37.467697 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.470772 kubelet[3230]: I0620 18:32:37.469037 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.470772 kubelet[3230]: I0620 18:32:37.469038 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.470772 kubelet[3230]: I0620 18:32:37.469110 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.470772 kubelet[3230]: I0620 18:32:37.469148 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.470772 kubelet[3230]: I0620 18:32:37.469188 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.471076 kubelet[3230]: I0620 18:32:37.469225 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.471385 kubelet[3230]: I0620 18:32:37.471306 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-hostproc" (OuterVolumeSpecName: "hostproc") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.471570 kubelet[3230]: I0620 18:32:37.471531 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.471729 kubelet[3230]: I0620 18:32:37.471703 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cni-path" (OuterVolumeSpecName: "cni-path") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:32:37.477245 kubelet[3230]: I0620 18:32:37.477176 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:32:37.477472 kubelet[3230]: I0620 18:32:37.477428 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01009017-aa32-43ee-845e-a0712813ae67-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:32:37.478856 kubelet[3230]: I0620 18:32:37.478799 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-kube-api-access-7rgkl" (OuterVolumeSpecName: "kube-api-access-7rgkl") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "kube-api-access-7rgkl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:32:37.480332 kubelet[3230]: I0620 18:32:37.480271 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01009017-aa32-43ee-845e-a0712813ae67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01009017-aa32-43ee-845e-a0712813ae67" (UID: "01009017-aa32-43ee-845e-a0712813ae67"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:32:37.543394 systemd[1]: Removed slice kubepods-burstable-pod01009017_aa32_43ee_845e_a0712813ae67.slice - libcontainer container kubepods-burstable-pod01009017_aa32_43ee_845e_a0712813ae67.slice. Jun 20 18:32:37.544286 systemd[1]: kubepods-burstable-pod01009017_aa32_43ee_845e_a0712813ae67.slice: Consumed 15.463s CPU time, 125.1M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 18:32:37.547772 systemd[1]: Removed slice kubepods-besteffort-pod31f4d569_fdd1_43a2_847b_e897730d763a.slice - libcontainer container kubepods-besteffort-pod31f4d569_fdd1_43a2_847b_e897730d763a.slice. 
Jun 20 18:32:37.567327 kubelet[3230]: I0620 18:32:37.567277 3230 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-xtables-lock\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567327 kubelet[3230]: I0620 18:32:37.567329 3230 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-cgroup\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567355 3230 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7rgkl\" (UniqueName: \"kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-kube-api-access-7rgkl\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567379 3230 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01009017-aa32-43ee-845e-a0712813ae67-cilium-config-path\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567402 3230 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01009017-aa32-43ee-845e-a0712813ae67-hubble-tls\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567426 3230 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cilium-run\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567447 3230 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01009017-aa32-43ee-845e-a0712813ae67-clustermesh-secrets\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567468 3230 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-hostproc\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567488 3230 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-etc-cni-netd\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.567573 kubelet[3230]: I0620 18:32:37.567509 3230 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-cni-path\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.568115 kubelet[3230]: I0620 18:32:37.567529 3230 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-net\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.568115 kubelet[3230]: I0620 18:32:37.567548 3230 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-bpf-maps\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.568115 kubelet[3230]: I0620 18:32:37.567568 3230 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-lib-modules\") on node \"ip-172-31-22-87\" 
DevicePath \"\"" Jun 20 18:32:37.568115 kubelet[3230]: I0620 18:32:37.567588 3230 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01009017-aa32-43ee-845e-a0712813ae67-host-proc-sys-kernel\") on node \"ip-172-31-22-87\" DevicePath \"\"" Jun 20 18:32:37.742440 kubelet[3230]: E0620 18:32:37.742185 3230 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:32:37.900710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9-rootfs.mount: Deactivated successfully. Jun 20 18:32:37.900883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9-shm.mount: Deactivated successfully. Jun 20 18:32:37.901021 systemd[1]: var-lib-kubelet-pods-01009017\x2daa32\x2d43ee\x2d845e\x2da0712813ae67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rgkl.mount: Deactivated successfully. Jun 20 18:32:37.901158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c-rootfs.mount: Deactivated successfully. Jun 20 18:32:37.901284 systemd[1]: var-lib-kubelet-pods-31f4d569\x2dfdd1\x2d43a2\x2d847b\x2de897730d763a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr6ktm.mount: Deactivated successfully. Jun 20 18:32:37.901425 systemd[1]: var-lib-kubelet-pods-01009017\x2daa32\x2d43ee\x2d845e\x2da0712813ae67-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:32:37.902262 systemd[1]: var-lib-kubelet-pods-01009017\x2daa32\x2d43ee\x2d845e\x2da0712813ae67-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jun 20 18:32:37.997521 kubelet[3230]: I0620 18:32:37.996140 3230 scope.go:117] "RemoveContainer" containerID="268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba" Jun 20 18:32:38.002739 containerd[1963]: time="2025-06-20T18:32:38.002359901Z" level=info msg="RemoveContainer for \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\"" Jun 20 18:32:38.030391 containerd[1963]: time="2025-06-20T18:32:38.030238877Z" level=info msg="RemoveContainer for \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\" returns successfully" Jun 20 18:32:38.031232 kubelet[3230]: I0620 18:32:38.031017 3230 scope.go:117] "RemoveContainer" containerID="a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d" Jun 20 18:32:38.035778 containerd[1963]: time="2025-06-20T18:32:38.034908521Z" level=info msg="RemoveContainer for \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\"" Jun 20 18:32:38.047300 containerd[1963]: time="2025-06-20T18:32:38.047024081Z" level=info msg="RemoveContainer for \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\" returns successfully" Jun 20 18:32:38.048292 kubelet[3230]: I0620 18:32:38.047957 3230 scope.go:117] "RemoveContainer" containerID="072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901" Jun 20 18:32:38.055225 containerd[1963]: time="2025-06-20T18:32:38.053882237Z" level=info msg="RemoveContainer for \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\"" Jun 20 18:32:38.064668 containerd[1963]: time="2025-06-20T18:32:38.064595081Z" level=info msg="RemoveContainer for \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\" returns successfully" Jun 20 18:32:38.065318 kubelet[3230]: I0620 18:32:38.065164 3230 scope.go:117] "RemoveContainer" containerID="ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d" Jun 20 18:32:38.069866 containerd[1963]: time="2025-06-20T18:32:38.069807713Z" level=info msg="RemoveContainer for \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\"" Jun 20 18:32:38.077844 containerd[1963]: time="2025-06-20T18:32:38.077782601Z" level=info msg="RemoveContainer for \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\" returns successfully" Jun 20 18:32:38.078768 kubelet[3230]: I0620 18:32:38.078559 3230 scope.go:117] "RemoveContainer" containerID="3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f" Jun 20 18:32:38.081197 containerd[1963]: time="2025-06-20T18:32:38.081143129Z" level=info msg="RemoveContainer for \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\"" Jun 20 18:32:38.087441 containerd[1963]: time="2025-06-20T18:32:38.087378833Z" level=info msg="RemoveContainer for \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\" returns successfully" Jun 20 18:32:38.088184 kubelet[3230]: I0620 18:32:38.088026 3230 scope.go:117] "RemoveContainer" containerID="268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba" Jun 20 18:32:38.088644 containerd[1963]: time="2025-06-20T18:32:38.088578317Z" level=error msg="ContainerStatus for \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\": not found" Jun 20 18:32:38.088979 kubelet[3230]: E0620 18:32:38.088935 3230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\": not found" containerID="268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba" Jun 20 18:32:38.089107 kubelet[3230]: I0620 18:32:38.089013 3230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba"} err="failed to get container status \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"268974a64f89b1aac4a7054821747abad5ee6812762b4f7992a28ca0786903ba\": not found" Jun 20 18:32:38.089168 kubelet[3230]: I0620 18:32:38.089111 3230 scope.go:117] "RemoveContainer" containerID="a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d" Jun 20 18:32:38.089567 containerd[1963]: time="2025-06-20T18:32:38.089516069Z" level=error msg="ContainerStatus for \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\": not found" Jun 20 18:32:38.090031 kubelet[3230]: E0620 18:32:38.089995 3230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\": not found" containerID="a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d" Jun 20 18:32:38.090387 kubelet[3230]: I0620 18:32:38.090212 3230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d"} err="failed to get container status \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a72740985dff56e5a8a38560334c30b0e9547b8c737771a5c9e324ddf96b238d\": not found" Jun 20 18:32:38.090387 kubelet[3230]: I0620 18:32:38.090251 3230 scope.go:117] "RemoveContainer" containerID="072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901" Jun 20 18:32:38.090872 containerd[1963]: time="2025-06-20T18:32:38.090803393Z" level=error msg="ContainerStatus for \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\": not found" Jun 20 18:32:38.091180 kubelet[3230]: E0620 18:32:38.091117 3230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\": not found" containerID="072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901" Jun 20 18:32:38.091254 kubelet[3230]: I0620 18:32:38.091160 3230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901"} err="failed to get container status \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\": rpc error: code = NotFound desc = an error occurred when try to find container \"072cd447054b7a7254f8eaafc7ce4426938b5a809ee3f1783a30116f35864901\": not found" Jun 20 18:32:38.091254 kubelet[3230]: I0620 18:32:38.091221 3230 
scope.go:117] "RemoveContainer" containerID="ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d" Jun 20 18:32:38.091671 containerd[1963]: time="2025-06-20T18:32:38.091592873Z" level=error msg="ContainerStatus for \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\": not found" Jun 20 18:32:38.092067 kubelet[3230]: E0620 18:32:38.092028 3230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\": not found" containerID="ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d" Jun 20 18:32:38.092163 kubelet[3230]: I0620 18:32:38.092076 3230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d"} err="failed to get container status \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec99dc99952365f6217727c9da0e605109abcf655ecda296144cfccf8208ae5d\": not found" Jun 20 18:32:38.092163 kubelet[3230]: I0620 18:32:38.092107 3230 scope.go:117] "RemoveContainer" containerID="3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f" Jun 20 18:32:38.092445 containerd[1963]: time="2025-06-20T18:32:38.092393993Z" level=error msg="ContainerStatus for \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\": not found" Jun 20 18:32:38.092860 kubelet[3230]: E0620 18:32:38.092667 3230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\": not found" containerID="3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f" Jun 20 18:32:38.092860 kubelet[3230]: I0620 18:32:38.092712 3230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f"} err="failed to get container status \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3908cd00a9acfc724a853a0eaef1739f6eb1b63bd91cacf0e7c8a3b3852c191f\": not found" Jun 20 18:32:38.092860 kubelet[3230]: I0620 18:32:38.092742 3230 scope.go:117] "RemoveContainer" containerID="66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2" Jun 20 18:32:38.094435 containerd[1963]: time="2025-06-20T18:32:38.094375925Z" level=info msg="RemoveContainer for \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\"" Jun 20 18:32:38.100677 containerd[1963]: time="2025-06-20T18:32:38.100545437Z" level=info msg="RemoveContainer for \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\" returns successfully" Jun 20 18:32:38.100904 kubelet[3230]: I0620 18:32:38.100859 3230 scope.go:117] "RemoveContainer" containerID="66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2" Jun 20 18:32:38.101265 containerd[1963]: 
time="2025-06-20T18:32:38.101196101Z" level=error msg="ContainerStatus for \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\": not found" Jun 20 18:32:38.101584 kubelet[3230]: E0620 18:32:38.101488 3230 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\": not found" containerID="66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2" Jun 20 18:32:38.101708 kubelet[3230]: I0620 18:32:38.101597 3230 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2"} err="failed to get container status \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\": rpc error: code = NotFound desc = an error occurred when try to find container \"66ae813c49ec6bb27019fa9d78cc35f09e7a4152499e2d541fc82c01a3a29ed2\": not found" Jun 20 18:32:38.525252 kubelet[3230]: E0620 18:32:38.525153 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mls9r" podUID="95d48e29-4e87-4e1d-8bae-ce05b9adbe7b" Jun 20 18:32:38.805109 sshd[5050]: Connection closed by 147.75.109.163 port 35460 Jun 20 18:32:38.804883 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:38.812048 systemd[1]: sshd@24-172.31.22.87:22-147.75.109.163:35460.service: Deactivated successfully. Jun 20 18:32:38.816954 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:32:38.818847 systemd[1]: session-25.scope: Consumed 1.203s CPU time, 23.3M memory peak. Jun 20 18:32:38.820146 systemd-logind[1929]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:32:38.822573 systemd-logind[1929]: Removed session 25. Jun 20 18:32:38.844155 systemd[1]: Started sshd@25-172.31.22.87:22-147.75.109.163:49278.service - OpenSSH per-connection server daemon (147.75.109.163:49278). Jun 20 18:32:39.036283 ntpd[1922]: Deleting interface #12 lxc_health, fe80::a000:57ff:fe6c:b51f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jun 20 18:32:39.036865 ntpd[1922]: 20 Jun 18:32:39 ntpd[1922]: Deleting interface #12 lxc_health, fe80::a000:57ff:fe6c:b51f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jun 20 18:32:39.037803 sshd[5212]: Accepted publickey for core from 147.75.109.163 port 49278 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY Jun 20 18:32:39.040947 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:32:39.053072 systemd-logind[1929]: New session 26 of user core. Jun 20 18:32:39.059952 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 20 18:32:39.532718 kubelet[3230]: I0620 18:32:39.531695 3230 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01009017-aa32-43ee-845e-a0712813ae67" path="/var/lib/kubelet/pods/01009017-aa32-43ee-845e-a0712813ae67/volumes" Jun 20 18:32:39.534347 kubelet[3230]: I0620 18:32:39.534286 3230 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f4d569-fdd1-43a2-847b-e897730d763a" path="/var/lib/kubelet/pods/31f4d569-fdd1-43a2-847b-e897730d763a/volumes" Jun 20 18:32:40.527174 kubelet[3230]: E0620 18:32:40.525769 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mls9r" podUID="95d48e29-4e87-4e1d-8bae-ce05b9adbe7b" Jun 20 18:32:40.709718 kubelet[3230]: I0620 18:32:40.707215 3230 setters.go:618] "Node became not ready" node="ip-172-31-22-87" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:32:40Z","lastTransitionTime":"2025-06-20T18:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 18:32:41.320614 sshd[5214]: Connection closed by 147.75.109.163 port 49278 Jun 20 18:32:41.324754 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Jun 20 18:32:41.332452 systemd[1]: sshd@25-172.31.22.87:22-147.75.109.163:49278.service: Deactivated successfully. Jun 20 18:32:41.343450 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:32:41.344083 systemd[1]: session-26.scope: Consumed 2.050s CPU time, 25.7M memory peak. Jun 20 18:32:41.355526 systemd-logind[1929]: Session 26 logged out. Waiting for processes to exit. Jun 20 18:32:41.386197 systemd[1]: Started sshd@26-172.31.22.87:22-147.75.109.163:49288.service - OpenSSH per-connection server daemon (147.75.109.163:49288). Jun 20 18:32:41.389723 systemd-logind[1929]: Removed session 26. Jun 20 18:32:41.418534 systemd[1]: Created slice kubepods-burstable-pod84628ca9_9eed_4b02_99c3_8ef1e2a2b7f9.slice - libcontainer container kubepods-burstable-pod84628ca9_9eed_4b02_99c3_8ef1e2a2b7f9.slice. 
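With the old cilium pod gone and no CNI configuration left, the kubelet flips the node's Ready condition to False, and the setters.go entry embeds the whole condition object as JSON inside the log line. A minimal sketch that pulls that object back out, assuming the `node="..." condition={...}` rendering shown above; the function name is illustrative.

```python
import json
import re

# kubelet embeds the Ready condition as JSON, e.g.
# ... setters.go:618] "Node became not ready" node="ip-172-31-22-87" condition={"type":"Ready",...}
CONDITION = re.compile(r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*\})')

def parse_not_ready(line):
    """Return (node name, Ready condition dict) from a 'Node became not ready' entry, or None."""
    m = CONDITION.search(line)
    if not m:
        return None
    return m["node"], json.loads(m["cond"])

if __name__ == "__main__":
    line = ('Jun 20 18:32:40.709718 kubelet[3230]: I0620 18:32:40.707215 3230 setters.go:618] '
            '"Node became not ready" node="ip-172-31-22-87" condition={"type":"Ready",'
            '"status":"False","lastHeartbeatTime":"2025-06-20T18:32:40Z",'
            '"lastTransitionTime":"2025-06-20T18:32:40Z","reason":"KubeletNotReady",'
            '"message":"container runtime network not ready: NetworkReady=false '
            'reason:NetworkPluginNotReady message:Network plugin returns error: '
            'cni plugin not initialized"}')
    node, cond = parse_not_ready(line)
    print(node, cond["reason"])  # ip-172-31-22-87 KubeletNotReady
```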
Jun 20 18:32:41.499183 kubelet[3230]: I0620 18:32:41.499090 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-clustermesh-secrets\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.499183 kubelet[3230]: I0620 18:32:41.499181 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-cilium-config-path\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.499474 kubelet[3230]: I0620 18:32:41.499224 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-hostproc\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.499474 kubelet[3230]: I0620 18:32:41.499261 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-cni-path\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500305 kubelet[3230]: I0620 18:32:41.499681 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-etc-cni-netd\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500305 kubelet[3230]: I0620 18:32:41.499736 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-cilium-ipsec-secrets\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500305 kubelet[3230]: I0620 18:32:41.499795 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-bpf-maps\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500305 kubelet[3230]: I0620 18:32:41.499833 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4mx\" (UniqueName: \"kubernetes.io/projected/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-kube-api-access-gg4mx\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500305 kubelet[3230]: I0620 18:32:41.499886 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-cilium-cgroup\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500305 kubelet[3230]: I0620 18:32:41.499941 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-xtables-lock\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500662 kubelet[3230]: I0620 18:32:41.499979 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-host-proc-sys-net\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500662 kubelet[3230]: I0620 18:32:41.500015 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-cilium-run\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500662 kubelet[3230]: I0620 18:32:41.500061 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-host-proc-sys-kernel\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500662 kubelet[3230]: I0620 18:32:41.500095 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-hubble-tls\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.500662 kubelet[3230]: I0620 18:32:41.500133 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9-lib-modules\") pod \"cilium-b5x66\" (UID: \"84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9\") " pod="kube-system/cilium-b5x66"
Jun 20 18:32:41.626474 sshd[5224]: Accepted publickey for core from 147.75.109.163 port 49288 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY
Jun 20 18:32:41.629823 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:32:41.688260 systemd-logind[1929]: New session 27 of user core.
Jun 20 18:32:41.695099 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 18:32:41.731611 containerd[1963]: time="2025-06-20T18:32:41.731483879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5x66,Uid:84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9,Namespace:kube-system,Attempt:0,}"
Jun 20 18:32:41.777377 containerd[1963]: time="2025-06-20T18:32:41.776867963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 18:32:41.777377 containerd[1963]: time="2025-06-20T18:32:41.776985983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 18:32:41.777377 containerd[1963]: time="2025-06-20T18:32:41.777022703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:32:41.777377 containerd[1963]: time="2025-06-20T18:32:41.777256427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:32:41.813030 systemd[1]: Started cri-containerd-27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699.scope - libcontainer container 27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699.
Jun 20 18:32:41.827613 sshd[5231]: Connection closed by 147.75.109.163 port 49288
Jun 20 18:32:41.829017 sshd-session[5224]: pam_unix(sshd:session): session closed for user core
Jun 20 18:32:41.837245 systemd[1]: sshd@26-172.31.22.87:22-147.75.109.163:49288.service: Deactivated successfully.
Jun 20 18:32:41.845613 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 18:32:41.850573 systemd-logind[1929]: Session 27 logged out. Waiting for processes to exit.
Jun 20 18:32:41.853075 update_engine[1931]: I20250620 18:32:41.852676 1931 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 18:32:41.853075 update_engine[1931]: I20250620 18:32:41.853026 1931 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 18:32:41.853966 update_engine[1931]: I20250620 18:32:41.853367 1931 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 18:32:41.856250 update_engine[1931]: E20250620 18:32:41.855612 1931 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 18:32:41.856250 update_engine[1931]: I20250620 18:32:41.855880 1931 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jun 20 18:32:41.879920 systemd[1]: Started sshd@27-172.31.22.87:22-147.75.109.163:49294.service - OpenSSH per-connection server daemon (147.75.109.163:49294).
Jun 20 18:32:41.883978 systemd-logind[1929]: Removed session 27.
Jun 20 18:32:41.888708 containerd[1963]: time="2025-06-20T18:32:41.887751672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b5x66,Uid:84628ca9-9eed-4b02-99c3-8ef1e2a2b7f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\""
Jun 20 18:32:41.904761 containerd[1963]: time="2025-06-20T18:32:41.904528920Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 18:32:41.931288 containerd[1963]: time="2025-06-20T18:32:41.931222272Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f\""
Jun 20 18:32:41.932750 containerd[1963]: time="2025-06-20T18:32:41.932509848Z" level=info msg="StartContainer for \"5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f\""
Jun 20 18:32:41.981943 systemd[1]: Started cri-containerd-5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f.scope - libcontainer container 5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f.
Jun 20 18:32:42.057509 containerd[1963]: time="2025-06-20T18:32:42.057339285Z" level=info msg="StartContainer for \"5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f\" returns successfully"
Jun 20 18:32:42.073354 systemd[1]: cri-containerd-5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f.scope: Deactivated successfully.
Jun 20 18:32:42.082122 sshd[5278]: Accepted publickey for core from 147.75.109.163 port 49294 ssh2: RSA SHA256:jqOXl21HUSxsI/+q94HFLSJ8H1GmL0DtTy/fnTl6lzY
Jun 20 18:32:42.086342 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:32:42.099241 systemd-logind[1929]: New session 28 of user core.
Jun 20 18:32:42.105923 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 20 18:32:42.145266 containerd[1963]: time="2025-06-20T18:32:42.144821733Z" level=info msg="shim disconnected" id=5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f namespace=k8s.io
Jun 20 18:32:42.145266 containerd[1963]: time="2025-06-20T18:32:42.144984021Z" level=warning msg="cleaning up after shim disconnected" id=5b954b9613b7241d5291c04e5f3252d9f2adb1f87fd1cbfe99b7464a3f50ec8f namespace=k8s.io
Jun 20 18:32:42.145266 containerd[1963]: time="2025-06-20T18:32:42.145006041Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:32:42.525884 kubelet[3230]: E0620 18:32:42.525464 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mls9r" podUID="95d48e29-4e87-4e1d-8bae-ce05b9adbe7b"
Jun 20 18:32:42.744260 kubelet[3230]: E0620 18:32:42.744113 3230 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 18:32:43.052542 containerd[1963]: time="2025-06-20T18:32:43.052480126Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 18:32:43.086988 containerd[1963]: time="2025-06-20T18:32:43.086912002Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96\""
Jun 20 18:32:43.088768 containerd[1963]: time="2025-06-20T18:32:43.087781006Z" level=info msg="StartContainer for \"d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96\""
Jun 20 18:32:43.191309 systemd[1]: Started cri-containerd-d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96.scope - libcontainer container d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96.
Jun 20 18:32:43.272249 containerd[1963]: time="2025-06-20T18:32:43.272111807Z" level=info msg="StartContainer for \"d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96\" returns successfully"
Jun 20 18:32:43.284291 systemd[1]: cri-containerd-d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96.scope: Deactivated successfully.
Jun 20 18:32:43.330716 containerd[1963]: time="2025-06-20T18:32:43.330520031Z" level=info msg="shim disconnected" id=d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96 namespace=k8s.io
Jun 20 18:32:43.330716 containerd[1963]: time="2025-06-20T18:32:43.330595643Z" level=warning msg="cleaning up after shim disconnected" id=d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96 namespace=k8s.io
Jun 20 18:32:43.330716 containerd[1963]: time="2025-06-20T18:32:43.330616247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:32:43.622423 systemd[1]: run-containerd-runc-k8s.io-d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96-runc.aGX868.mount: Deactivated successfully.
Jun 20 18:32:43.622850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d56f671b72a1307dbab8b5f00cc1a7e78f8767ca83dfdd93f28cfaeeda0f3a96-rootfs.mount: Deactivated successfully.
Jun 20 18:32:44.056659 containerd[1963]: time="2025-06-20T18:32:44.056563895Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 18:32:44.096136 containerd[1963]: time="2025-06-20T18:32:44.095994107Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5\""
Jun 20 18:32:44.097693 containerd[1963]: time="2025-06-20T18:32:44.097528691Z" level=info msg="StartContainer for \"61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5\""
Jun 20 18:32:44.159923 systemd[1]: Started cri-containerd-61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5.scope - libcontainer container 61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5.
Jun 20 18:32:44.217914 containerd[1963]: time="2025-06-20T18:32:44.217791323Z" level=info msg="StartContainer for \"61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5\" returns successfully"
Jun 20 18:32:44.222373 systemd[1]: cri-containerd-61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5.scope: Deactivated successfully.
Jun 20 18:32:44.274784 containerd[1963]: time="2025-06-20T18:32:44.274682928Z" level=info msg="shim disconnected" id=61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5 namespace=k8s.io
Jun 20 18:32:44.274784 containerd[1963]: time="2025-06-20T18:32:44.274770672Z" level=warning msg="cleaning up after shim disconnected" id=61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5 namespace=k8s.io
Jun 20 18:32:44.275149 containerd[1963]: time="2025-06-20T18:32:44.274792872Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:32:44.525569 kubelet[3230]: E0620 18:32:44.525185 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mls9r" podUID="95d48e29-4e87-4e1d-8bae-ce05b9adbe7b"
Jun 20 18:32:44.623023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61e3bcbaa03c956a796a71778701e5a28ea4a5683bee6de463263fce6580c1b5-rootfs.mount: Deactivated successfully.
Jun 20 18:32:45.064298 containerd[1963]: time="2025-06-20T18:32:45.064197744Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 18:32:45.100258 containerd[1963]: time="2025-06-20T18:32:45.100008624Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe\""
Jun 20 18:32:45.102193 containerd[1963]: time="2025-06-20T18:32:45.102125004Z" level=info msg="StartContainer for \"8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe\""
Jun 20 18:32:45.158946 systemd[1]: Started cri-containerd-8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe.scope - libcontainer container 8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe.
Jun 20 18:32:45.202964 systemd[1]: cri-containerd-8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe.scope: Deactivated successfully.
Jun 20 18:32:45.208260 containerd[1963]: time="2025-06-20T18:32:45.208179672Z" level=info msg="StartContainer for \"8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe\" returns successfully"
Jun 20 18:32:45.251922 containerd[1963]: time="2025-06-20T18:32:45.251710753Z" level=info msg="shim disconnected" id=8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe namespace=k8s.io
Jun 20 18:32:45.251922 containerd[1963]: time="2025-06-20T18:32:45.251865145Z" level=warning msg="cleaning up after shim disconnected" id=8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe namespace=k8s.io
Jun 20 18:32:45.252433 containerd[1963]: time="2025-06-20T18:32:45.251889661Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:32:45.623812 systemd[1]: run-containerd-runc-k8s.io-8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe-runc.UMn1id.mount: Deactivated successfully.
Jun 20 18:32:45.624058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a5eb7c337ee8b4818513edce9fbdb7eb2a2902e97cedc22bdbdbc18786ecfbe-rootfs.mount: Deactivated successfully.
Jun 20 18:32:46.075335 containerd[1963]: time="2025-06-20T18:32:46.075279409Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 18:32:46.113875 containerd[1963]: time="2025-06-20T18:32:46.113804281Z" level=info msg="CreateContainer within sandbox \"27fd29df8837be9573650c9c364fcf8b7cb0a460512588f2cbbc1e24143f0699\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8f09cdd58a02ca13bd197df5dbf4b0840430b1f8188510429eab817ab5cbf0d\""
Jun 20 18:32:46.115574 containerd[1963]: time="2025-06-20T18:32:46.115490401Z" level=info msg="StartContainer for \"e8f09cdd58a02ca13bd197df5dbf4b0840430b1f8188510429eab817ab5cbf0d\""
Jun 20 18:32:46.202182 systemd[1]: Started cri-containerd-e8f09cdd58a02ca13bd197df5dbf4b0840430b1f8188510429eab817ab5cbf0d.scope - libcontainer container e8f09cdd58a02ca13bd197df5dbf4b0840430b1f8188510429eab817ab5cbf0d.
Jun 20 18:32:46.282192 containerd[1963]: time="2025-06-20T18:32:46.282082598Z" level=info msg="StartContainer for \"e8f09cdd58a02ca13bd197df5dbf4b0840430b1f8188510429eab817ab5cbf0d\" returns successfully"
Jun 20 18:32:46.525485 kubelet[3230]: E0620 18:32:46.525307 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-mls9r" podUID="95d48e29-4e87-4e1d-8bae-ce05b9adbe7b"
Jun 20 18:32:47.123689 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jun 20 18:32:47.530419 containerd[1963]: time="2025-06-20T18:32:47.530133016Z" level=info msg="StopPodSandbox for \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\""
Jun 20 18:32:47.530419 containerd[1963]: time="2025-06-20T18:32:47.530275468Z" level=info msg="TearDown network for sandbox \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" successfully"
Jun 20 18:32:47.530419 containerd[1963]: time="2025-06-20T18:32:47.530296792Z" level=info msg="StopPodSandbox for \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" returns successfully"
Jun 20 18:32:47.531675 containerd[1963]: time="2025-06-20T18:32:47.531592624Z" level=info msg="RemovePodSandbox for \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\""
Jun 20 18:32:47.531814 containerd[1963]: time="2025-06-20T18:32:47.531680644Z" level=info msg="Forcibly stopping sandbox \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\""
Jun 20 18:32:47.531814 containerd[1963]: time="2025-06-20T18:32:47.531784720Z" level=info msg="TearDown network for sandbox \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" successfully"
Jun 20 18:32:47.538120 containerd[1963]: time="2025-06-20T18:32:47.538037992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 18:32:47.538350 containerd[1963]: time="2025-06-20T18:32:47.538141312Z" level=info msg="RemovePodSandbox \"25805cbb117706ca06893d4e0415bfa68ded4f2f0670588836e1147bc631b10c\" returns successfully"
Jun 20 18:32:47.539040 containerd[1963]: time="2025-06-20T18:32:47.538878304Z" level=info msg="StopPodSandbox for \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\""
Jun 20 18:32:47.539040 containerd[1963]: time="2025-06-20T18:32:47.539025304Z" level=info msg="TearDown network for sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" successfully"
Jun 20 18:32:47.539443 containerd[1963]: time="2025-06-20T18:32:47.539050468Z" level=info msg="StopPodSandbox for \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" returns successfully"
Jun 20 18:32:47.540758 containerd[1963]: time="2025-06-20T18:32:47.539801848Z" level=info msg="RemovePodSandbox for \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\""
Jun 20 18:32:47.540758 containerd[1963]: time="2025-06-20T18:32:47.539851360Z" level=info msg="Forcibly stopping sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\""
Jun 20 18:32:47.540758 containerd[1963]: time="2025-06-20T18:32:47.539952724Z" level=info msg="TearDown network for sandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" successfully"
Jun 20 18:32:47.547298 containerd[1963]: time="2025-06-20T18:32:47.547082260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 18:32:47.547298 containerd[1963]: time="2025-06-20T18:32:47.547189000Z" level=info msg="RemovePodSandbox \"c71903837b7291f15051180510f4a0e4262fe65583e5dda822bb6dbd119649c9\" returns successfully"
Jun 20 18:32:51.401799 systemd-networkd[1860]: lxc_health: Link UP
Jun 20 18:32:51.413361 (udev-worker)[6084]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:32:51.424409 systemd-networkd[1860]: lxc_health: Gained carrier
Jun 20 18:32:51.778878 kubelet[3230]: I0620 18:32:51.778669 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b5x66" podStartSLOduration=10.778648017 podStartE2EDuration="10.778648017s" podCreationTimestamp="2025-06-20 18:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:32:47.120101846 +0000 UTC m=+119.883100989" watchObservedRunningTime="2025-06-20 18:32:51.778648017 +0000 UTC m=+124.541646968"
Jun 20 18:32:51.855871 update_engine[1931]: I20250620 18:32:51.855775 1931 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 18:32:51.856398 update_engine[1931]: I20250620 18:32:51.856133 1931 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 18:32:51.856669 update_engine[1931]: I20250620 18:32:51.856506 1931 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 18:32:51.857847 update_engine[1931]: E20250620 18:32:51.857766 1931 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 18:32:51.858009 update_engine[1931]: I20250620 18:32:51.857928 1931 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 20 18:32:51.858009 update_engine[1931]: I20250620 18:32:51.857950 1931 omaha_request_action.cc:617] Omaha request response:
Jun 20 18:32:51.858118 update_engine[1931]: E20250620 18:32:51.858069 1931 omaha_request_action.cc:636] Omaha request network transfer failed.
Jun 20 18:32:51.858118 update_engine[1931]: I20250620 18:32:51.858101 1931 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jun 20 18:32:51.858213 update_engine[1931]: I20250620 18:32:51.858119 1931 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 18:32:51.858213 update_engine[1931]: I20250620 18:32:51.858134 1931 update_attempter.cc:306] Processing Done.
Jun 20 18:32:51.858213 update_engine[1931]: E20250620 18:32:51.858161 1931 update_attempter.cc:619] Update failed.
Jun 20 18:32:51.858213 update_engine[1931]: I20250620 18:32:51.858177 1931 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jun 20 18:32:51.858213 update_engine[1931]: I20250620 18:32:51.858193 1931 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jun 20 18:32:51.858458 update_engine[1931]: I20250620 18:32:51.858209 1931 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jun 20 18:32:51.858458 update_engine[1931]: I20250620 18:32:51.858318 1931 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jun 20 18:32:51.858458 update_engine[1931]: I20250620 18:32:51.858354 1931 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jun 20 18:32:51.858458 update_engine[1931]: I20250620 18:32:51.858402 1931 omaha_request_action.cc:272] Request:
Jun 20 18:32:51.858458 update_engine[1931]:
Jun 20 18:32:51.858458 update_engine[1931]:
Jun 20 18:32:51.858458 update_engine[1931]:
Jun 20 18:32:51.858458 update_engine[1931]:
Jun 20 18:32:51.858458 update_engine[1931]:
Jun 20 18:32:51.858458 update_engine[1931]:
Jun 20 18:32:51.858458 update_engine[1931]: I20250620 18:32:51.858421 1931 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 18:32:51.858985 update_engine[1931]: I20250620 18:32:51.858755 1931 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 18:32:51.859687 update_engine[1931]: I20250620 18:32:51.859185 1931 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 18:32:51.859687 update_engine[1931]: E20250620 18:32:51.859547 1931 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 18:32:51.859687 update_engine[1931]: I20250620 18:32:51.859654 1931 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 20 18:32:51.859687 update_engine[1931]: I20250620 18:32:51.859680 1931 omaha_request_action.cc:617] Omaha request response:
Jun 20 18:32:51.859939 update_engine[1931]: I20250620 18:32:51.859699 1931 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 18:32:51.859939 update_engine[1931]: I20250620 18:32:51.859715 1931 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 18:32:51.859939 update_engine[1931]: I20250620 18:32:51.859730 1931 update_attempter.cc:306] Processing Done.
Jun 20 18:32:51.859939 update_engine[1931]: I20250620 18:32:51.859747 1931 update_attempter.cc:310] Error event sent.
Jun 20 18:32:51.859939 update_engine[1931]: I20250620 18:32:51.859768 1931 update_check_scheduler.cc:74] Next update check in 41m34s
Jun 20 18:32:51.861690 locksmithd[1978]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jun 20 18:32:51.861690 locksmithd[1978]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jun 20 18:32:52.945877 systemd-networkd[1860]: lxc_health: Gained IPv6LL
Jun 20 18:32:55.036536 ntpd[1922]: Listen normally on 15 lxc_health [fe80::5ca6:b0ff:fee1:4f75%14]:123
Jun 20 18:32:55.037113 ntpd[1922]: 20 Jun 18:32:55 ntpd[1922]: Listen normally on 15 lxc_health [fe80::5ca6:b0ff:fee1:4f75%14]:123
Jun 20 18:32:58.024678 sshd[5329]: Connection closed by 147.75.109.163 port 49294
Jun 20 18:32:58.025530 sshd-session[5278]: pam_unix(sshd:session): session closed for user core
Jun 20 18:32:58.032837 systemd[1]: sshd@27-172.31.22.87:22-147.75.109.163:49294.service: Deactivated successfully.
Jun 20 18:32:58.041333 systemd[1]: session-28.scope: Deactivated successfully.
Jun 20 18:32:58.049519 systemd-logind[1929]: Session 28 logged out. Waiting for processes to exit.
Jun 20 18:32:58.053461 systemd-logind[1929]: Removed session 28.
Jun 20 18:33:12.477137 systemd[1]: cri-containerd-88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0.scope: Deactivated successfully.
Jun 20 18:33:12.478982 systemd[1]: cri-containerd-88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0.scope: Consumed 4.598s CPU time, 57.4M memory peak.
Jun 20 18:33:12.527375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0-rootfs.mount: Deactivated successfully.
Jun 20 18:33:12.537941 containerd[1963]: time="2025-06-20T18:33:12.537794308Z" level=info msg="shim disconnected" id=88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0 namespace=k8s.io
Jun 20 18:33:12.539201 containerd[1963]: time="2025-06-20T18:33:12.538443448Z" level=warning msg="cleaning up after shim disconnected" id=88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0 namespace=k8s.io
Jun 20 18:33:12.539201 containerd[1963]: time="2025-06-20T18:33:12.538477228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:33:13.155458 kubelet[3230]: I0620 18:33:13.155392 3230 scope.go:117] "RemoveContainer" containerID="88b5ba4fb92ba4eb78d9558e07b765d595bcc5530a61a372bde68b9b3ccd92a0"
Jun 20 18:33:13.158595 containerd[1963]: time="2025-06-20T18:33:13.158521707Z" level=info msg="CreateContainer within sandbox \"13d691c55a5f2c4f98540ea070efb5f1d6eafb53105c8d6c39b97fcc233daf13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jun 20 18:33:13.183661 containerd[1963]: time="2025-06-20T18:33:13.183589935Z" level=info msg="CreateContainer within sandbox \"13d691c55a5f2c4f98540ea070efb5f1d6eafb53105c8d6c39b97fcc233daf13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"af58783d2539d5820a11a5e02dce7c33cd3e13c4befe4645fb4eb98562ac6825\""
Jun 20 18:33:13.184483 containerd[1963]: time="2025-06-20T18:33:13.184392519Z" level=info msg="StartContainer for \"af58783d2539d5820a11a5e02dce7c33cd3e13c4befe4645fb4eb98562ac6825\""
Jun 20 18:33:13.239954 systemd[1]: Started cri-containerd-af58783d2539d5820a11a5e02dce7c33cd3e13c4befe4645fb4eb98562ac6825.scope - libcontainer container af58783d2539d5820a11a5e02dce7c33cd3e13c4befe4645fb4eb98562ac6825.
Jun 20 18:33:13.318360 containerd[1963]: time="2025-06-20T18:33:13.318089512Z" level=info msg="StartContainer for \"af58783d2539d5820a11a5e02dce7c33cd3e13c4befe4645fb4eb98562ac6825\" returns successfully"
Jun 20 18:33:17.365566 systemd[1]: cri-containerd-ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124.scope: Deactivated successfully.
Jun 20 18:33:17.366701 systemd[1]: cri-containerd-ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124.scope: Consumed 6.057s CPU time, 20.8M memory peak.
Jun 20 18:33:17.412884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124-rootfs.mount: Deactivated successfully.
Jun 20 18:33:17.426866 containerd[1963]: time="2025-06-20T18:33:17.426739868Z" level=info msg="shim disconnected" id=ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124 namespace=k8s.io
Jun 20 18:33:17.427766 containerd[1963]: time="2025-06-20T18:33:17.426889292Z" level=warning msg="cleaning up after shim disconnected" id=ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124 namespace=k8s.io
Jun 20 18:33:17.427766 containerd[1963]: time="2025-06-20T18:33:17.426912836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:33:18.174696 kubelet[3230]: I0620 18:33:18.174266 3230 scope.go:117] "RemoveContainer" containerID="ed75def976827e12c2cd05ea4c1023de3f8a619cd2f47f5bb0b89d6d6d176124"
Jun 20 18:33:18.178894 containerd[1963]: time="2025-06-20T18:33:18.178605764Z" level=info msg="CreateContainer within sandbox \"a901066bd062792d97c78a98ca408d22050bce0a7ca92d1bef1635ec39289a79\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jun 20 18:33:18.214305 containerd[1963]: time="2025-06-20T18:33:18.214214288Z" level=info msg="CreateContainer within sandbox \"a901066bd062792d97c78a98ca408d22050bce0a7ca92d1bef1635ec39289a79\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7fd57dfd7a9c111e92aa4936b4f61055f86362c3c24da061be8152d995e3774f\""
Jun 20 18:33:18.215236 containerd[1963]: time="2025-06-20T18:33:18.215096936Z" level=info msg="StartContainer for \"7fd57dfd7a9c111e92aa4936b4f61055f86362c3c24da061be8152d995e3774f\""
Jun 20 18:33:18.273936 systemd[1]: Started cri-containerd-7fd57dfd7a9c111e92aa4936b4f61055f86362c3c24da061be8152d995e3774f.scope - libcontainer container 7fd57dfd7a9c111e92aa4936b4f61055f86362c3c24da061be8152d995e3774f.
Jun 20 18:33:18.343001 containerd[1963]: time="2025-06-20T18:33:18.342857073Z" level=info msg="StartContainer for \"7fd57dfd7a9c111e92aa4936b4f61055f86362c3c24da061be8152d995e3774f\" returns successfully"
Jun 20 18:33:19.849946 kubelet[3230]: E0620 18:33:19.849861 3230 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-87?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"