Mar 17 17:24:27.224704 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 17:24:27.224749 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:24:27.224773 kernel: KASLR disabled due to lack of seed
Mar 17 17:24:27.224790 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:24:27.224805 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Mar 17 17:24:27.224821 kernel: secureboot: Secure boot disabled
Mar 17 17:24:27.224838 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:24:27.224854 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 17:24:27.224870 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 17:24:27.224885 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 17:24:27.224905 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 17:24:27.224921 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 17:24:27.224936 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 17:24:27.224951 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 17:24:27.224969 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 17:24:27.224989 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 17:24:27.225006 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 17:24:27.225022 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 17:24:27.225038 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 17:24:27.225054 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 17:24:27.225070 kernel: printk: bootconsole [uart0] enabled
Mar 17 17:24:27.225085 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:24:27.225102 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:24:27.225118 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 17 17:24:27.225134 kernel: Zone ranges:
Mar 17 17:24:27.225150 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:24:27.225170 kernel: DMA32 empty
Mar 17 17:24:27.225186 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 17:24:27.227260 kernel: Movable zone start for each node
Mar 17 17:24:27.227311 kernel: Early memory node ranges
Mar 17 17:24:27.227328 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 17:24:27.227345 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 17:24:27.227362 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 17:24:27.227397 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 17:24:27.227418 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 17:24:27.227434 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 17:24:27.227452 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 17:24:27.227468 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 17:24:27.227494 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:24:27.227512 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 17:24:27.227536 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:24:27.227553 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 17:24:27.227570 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:24:27.227593 kernel: psci: Trusted OS migration not required
Mar 17 17:24:27.227611 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:24:27.227628 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:24:27.227645 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:24:27.227662 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:24:27.227679 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:24:27.227697 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:24:27.227713 kernel: CPU features: detected: Spectre-v2
Mar 17 17:24:27.227730 kernel: CPU features: detected: Spectre-v3a
Mar 17 17:24:27.227747 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:24:27.227764 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 17:24:27.227781 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 17:24:27.227803 kernel: alternatives: applying boot alternatives
Mar 17 17:24:27.227825 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:27.227845 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:24:27.227862 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:24:27.227879 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:24:27.227897 kernel: Fallback order for Node 0: 0
Mar 17 17:24:27.227914 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 17:24:27.227931 kernel: Policy zone: Normal
Mar 17 17:24:27.227948 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:24:27.227965 kernel: software IO TLB: area num 2.
Mar 17 17:24:27.227986 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 17:24:27.228004 kernel: Memory: 3819896K/4030464K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 210568K reserved, 0K cma-reserved)
Mar 17 17:24:27.228022 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:24:27.228039 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:24:27.228057 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:24:27.228074 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:24:27.228092 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:24:27.228110 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:24:27.228127 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:24:27.228144 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:24:27.228161 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:24:27.228183 kernel: GICv3: 96 SPIs implemented
Mar 17 17:24:27.228200 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:24:27.228307 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:24:27.228325 kernel: GICv3: GICv3 features: 16 PPIs
Mar 17 17:24:27.228342 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 17:24:27.228358 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 17:24:27.228376 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:24:27.228393 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:24:27.228410 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 17 17:24:27.228428 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 17:24:27.228445 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 17 17:24:27.228462 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:24:27.228486 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 17:24:27.228503 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 17:24:27.228521 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 17:24:27.228539 kernel: Console: colour dummy device 80x25
Mar 17 17:24:27.228557 kernel: printk: console [tty1] enabled
Mar 17 17:24:27.228575 kernel: ACPI: Core revision 20230628
Mar 17 17:24:27.228593 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 17:24:27.228611 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:24:27.228629 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:24:27.228646 kernel: landlock: Up and running.
Mar 17 17:24:27.228669 kernel: SELinux: Initializing.
Mar 17 17:24:27.228686 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:27.228703 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:27.228721 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:27.228738 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:27.228756 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:24:27.228773 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:24:27.228791 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 17:24:27.228812 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 17:24:27.228830 kernel: Remapping and enabling EFI services.
Mar 17 17:24:27.228847 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:24:27.228864 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:24:27.228882 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 17:24:27.228899 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 17 17:24:27.228917 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 17:24:27.228934 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:24:27.228951 kernel: SMP: Total of 2 processors activated.
Mar 17 17:24:27.228968 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:24:27.228990 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 17:24:27.229008 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:24:27.229036 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:24:27.229059 kernel: alternatives: applying system-wide alternatives
Mar 17 17:24:27.229076 kernel: devtmpfs: initialized
Mar 17 17:24:27.229094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:24:27.229113 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:24:27.229131 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:24:27.229149 kernel: SMBIOS 3.0.0 present.
Mar 17 17:24:27.229171 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 17:24:27.229189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:24:27.231281 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:24:27.231317 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:24:27.231338 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:24:27.231358 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:24:27.231394 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Mar 17 17:24:27.231430 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:24:27.231450 kernel: cpuidle: using governor menu
Mar 17 17:24:27.231470 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:24:27.231489 kernel: ASID allocator initialised with 65536 entries
Mar 17 17:24:27.231509 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:24:27.231528 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:24:27.231549 kernel: Modules: 17424 pages in range for non-PLT usage
Mar 17 17:24:27.231568 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:24:27.231587 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:24:27.231612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:24:27.231632 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:24:27.231651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:24:27.231671 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:24:27.231690 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:24:27.231709 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:24:27.231728 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:24:27.231747 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:24:27.231766 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:24:27.231789 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:24:27.231810 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:24:27.231833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:24:27.231852 kernel: ACPI: Interpreter enabled
Mar 17 17:24:27.231871 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:24:27.231890 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:24:27.231909 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 17:24:27.232409 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:24:27.232658 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:24:27.232862 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:24:27.233072 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 17:24:27.235433 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 17:24:27.235482 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 17:24:27.235502 kernel: acpiphp: Slot [1] registered
Mar 17 17:24:27.235522 kernel: acpiphp: Slot [2] registered
Mar 17 17:24:27.235541 kernel: acpiphp: Slot [3] registered
Mar 17 17:24:27.235569 kernel: acpiphp: Slot [4] registered
Mar 17 17:24:27.235588 kernel: acpiphp: Slot [5] registered
Mar 17 17:24:27.235606 kernel: acpiphp: Slot [6] registered
Mar 17 17:24:27.235624 kernel: acpiphp: Slot [7] registered
Mar 17 17:24:27.235642 kernel: acpiphp: Slot [8] registered
Mar 17 17:24:27.235660 kernel: acpiphp: Slot [9] registered
Mar 17 17:24:27.235678 kernel: acpiphp: Slot [10] registered
Mar 17 17:24:27.235696 kernel: acpiphp: Slot [11] registered
Mar 17 17:24:27.235714 kernel: acpiphp: Slot [12] registered
Mar 17 17:24:27.235732 kernel: acpiphp: Slot [13] registered
Mar 17 17:24:27.235755 kernel: acpiphp: Slot [14] registered
Mar 17 17:24:27.235773 kernel: acpiphp: Slot [15] registered
Mar 17 17:24:27.235791 kernel: acpiphp: Slot [16] registered
Mar 17 17:24:27.235809 kernel: acpiphp: Slot [17] registered
Mar 17 17:24:27.235826 kernel: acpiphp: Slot [18] registered
Mar 17 17:24:27.235844 kernel: acpiphp: Slot [19] registered
Mar 17 17:24:27.235862 kernel: acpiphp: Slot [20] registered
Mar 17 17:24:27.235880 kernel: acpiphp: Slot [21] registered
Mar 17 17:24:27.235898 kernel: acpiphp: Slot [22] registered
Mar 17 17:24:27.235921 kernel: acpiphp: Slot [23] registered
Mar 17 17:24:27.235939 kernel: acpiphp: Slot [24] registered
Mar 17 17:24:27.235957 kernel: acpiphp: Slot [25] registered
Mar 17 17:24:27.235975 kernel: acpiphp: Slot [26] registered
Mar 17 17:24:27.235993 kernel: acpiphp: Slot [27] registered
Mar 17 17:24:27.236012 kernel: acpiphp: Slot [28] registered
Mar 17 17:24:27.236030 kernel: acpiphp: Slot [29] registered
Mar 17 17:24:27.236048 kernel: acpiphp: Slot [30] registered
Mar 17 17:24:27.236066 kernel: acpiphp: Slot [31] registered
Mar 17 17:24:27.236084 kernel: PCI host bridge to bus 0000:00
Mar 17 17:24:27.236391 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 17:24:27.236577 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:24:27.236759 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:24:27.236938 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 17:24:27.237174 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 17:24:27.238469 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 17:24:27.238700 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 17:24:27.238917 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 17:24:27.239121 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 17:24:27.239356 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:24:27.239603 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 17:24:27.239871 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 17:24:27.240126 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 17:24:27.240369 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 17:24:27.240577 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:24:27.240818 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 17:24:27.241042 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 17:24:27.241305 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 17:24:27.241519 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 17:24:27.241730 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 17:24:27.241931 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 17:24:27.242115 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:24:27.242346 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:24:27.242374 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:24:27.242393 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:24:27.242412 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:24:27.242431 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:24:27.242450 kernel: iommu: Default domain type: Translated
Mar 17 17:24:27.242477 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:24:27.242496 kernel: efivars: Registered efivars operations
Mar 17 17:24:27.242514 kernel: vgaarb: loaded
Mar 17 17:24:27.242533 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:24:27.242551 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:24:27.242569 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:24:27.242588 kernel: pnp: PnP ACPI init
Mar 17 17:24:27.246286 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 17:24:27.246339 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:24:27.246360 kernel: NET: Registered PF_INET protocol family
Mar 17 17:24:27.246380 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:24:27.246400 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:24:27.246419 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:24:27.246438 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:24:27.246456 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:24:27.246475 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:24:27.246494 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:27.246518 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:27.246537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:24:27.246556 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:24:27.246574 kernel: kvm [1]: HYP mode not available
Mar 17 17:24:27.246593 kernel: Initialise system trusted keyrings
Mar 17 17:24:27.246611 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:24:27.246630 kernel: Key type asymmetric registered
Mar 17 17:24:27.246649 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:24:27.246668 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:24:27.246691 kernel: io scheduler mq-deadline registered
Mar 17 17:24:27.246710 kernel: io scheduler kyber registered
Mar 17 17:24:27.246729 kernel: io scheduler bfq registered
Mar 17 17:24:27.246979 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 17:24:27.247008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:24:27.247027 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:24:27.247047 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 17:24:27.247065 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 17:24:27.247090 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:24:27.247110 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:24:27.247395 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 17:24:27.247426 kernel: printk: console [ttyS0] disabled
Mar 17 17:24:27.247446 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 17:24:27.247464 kernel: printk: console [ttyS0] enabled
Mar 17 17:24:27.247483 kernel: printk: bootconsole [uart0] disabled
Mar 17 17:24:27.247501 kernel: thunder_xcv, ver 1.0
Mar 17 17:24:27.247519 kernel: thunder_bgx, ver 1.0
Mar 17 17:24:27.247537 kernel: nicpf, ver 1.0
Mar 17 17:24:27.247562 kernel: nicvf, ver 1.0
Mar 17 17:24:27.247787 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:24:27.247990 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:24:26 UTC (1742232266)
Mar 17 17:24:27.248016 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:24:27.248035 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 17:24:27.248054 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:24:27.248073 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:24:27.248098 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:24:27.248117 kernel: Segment Routing with IPv6
Mar 17 17:24:27.248135 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:24:27.248153 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:24:27.248171 kernel: Key type dns_resolver registered
Mar 17 17:24:27.248189 kernel: registered taskstats version 1
Mar 17 17:24:27.252280 kernel: Loading compiled-in X.509 certificates
Mar 17 17:24:27.252311 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:24:27.252329 kernel: Key type .fscrypt registered
Mar 17 17:24:27.252348 kernel: Key type fscrypt-provisioning registered
Mar 17 17:24:27.252376 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:24:27.252395 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:24:27.252413 kernel: ima: No architecture policies found
Mar 17 17:24:27.252431 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:24:27.252450 kernel: clk: Disabling unused clocks
Mar 17 17:24:27.252468 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:24:27.252486 kernel: Run /init as init process
Mar 17 17:24:27.252504 kernel: with arguments:
Mar 17 17:24:27.252522 kernel: /init
Mar 17 17:24:27.252544 kernel: with environment:
Mar 17 17:24:27.252562 kernel: HOME=/
Mar 17 17:24:27.252580 kernel: TERM=linux
Mar 17 17:24:27.252598 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:24:27.252622 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:24:27.252645 systemd[1]: Detected virtualization amazon.
Mar 17 17:24:27.252666 systemd[1]: Detected architecture arm64.
Mar 17 17:24:27.252689 systemd[1]: Running in initrd.
Mar 17 17:24:27.252709 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:24:27.252729 systemd[1]: Hostname set to .
Mar 17 17:24:27.252749 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:24:27.252768 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:24:27.252788 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:27.252808 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:27.252829 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:24:27.252854 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:24:27.252874 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:24:27.252894 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:24:27.252917 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:24:27.252938 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:24:27.252957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:27.252977 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:24:27.253002 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:24:27.253022 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:24:27.253042 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:24:27.253061 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:24:27.253081 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:24:27.253101 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:24:27.253121 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:24:27.253141 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:24:27.253160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:27.253186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:27.253226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:27.253251 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:24:27.253272 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:24:27.253292 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:24:27.253312 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:24:27.253332 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:24:27.253352 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:24:27.253379 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:24:27.253399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:27.253420 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:24:27.253442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:27.253462 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:24:27.253484 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:24:27.253556 systemd-journald[252]: Collecting audit messages is disabled.
Mar 17 17:24:27.253601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:27.253622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:24:27.253648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:27.253668 kernel: Bridge firewalling registered
Mar 17 17:24:27.253688 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:27.253707 systemd-journald[252]: Journal started
Mar 17 17:24:27.253745 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2033caf81b6ddb61217a75af514ca5) is 8.0M, max 75.3M, 67.3M free.
Mar 17 17:24:27.200483 systemd-modules-load[253]: Inserted module 'overlay'
Mar 17 17:24:27.258020 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:24:27.240811 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 17 17:24:27.259571 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:24:27.276800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:24:27.283951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:24:27.290492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:24:27.319946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:27.326288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:27.329913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:24:27.337314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:27.359640 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:24:27.369501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:24:27.384402 dracut-cmdline[289]: dracut-dracut-053
Mar 17 17:24:27.389447 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:27.473023 systemd-resolved[290]: Positive Trust Anchors:
Mar 17 17:24:27.473076 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:24:27.473137 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:24:27.525243 kernel: SCSI subsystem initialized
Mar 17 17:24:27.533239 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:24:27.545254 kernel: iscsi: registered transport (tcp)
Mar 17 17:24:27.567301 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:24:27.567406 kernel: QLogic iSCSI HBA Driver
Mar 17 17:24:27.680249 kernel: random: crng init done
Mar 17 17:24:27.680608 systemd-resolved[290]: Defaulting to hostname 'linux'.
Mar 17 17:24:27.686517 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:24:27.696146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:24:27.704263 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:24:27.716538 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:24:27.752495 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:24:27.752589 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:24:27.752616 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:24:27.819253 kernel: raid6: neonx8 gen() 6739 MB/s
Mar 17 17:24:27.836244 kernel: raid6: neonx4 gen() 6531 MB/s
Mar 17 17:24:27.853244 kernel: raid6: neonx2 gen() 5434 MB/s
Mar 17 17:24:27.870244 kernel: raid6: neonx1 gen() 3940 MB/s
Mar 17 17:24:27.887244 kernel: raid6: int64x8 gen() 3826 MB/s
Mar 17 17:24:27.904240 kernel: raid6: int64x4 gen() 3713 MB/s
Mar 17 17:24:27.921259 kernel: raid6: int64x2 gen() 3594 MB/s
Mar 17 17:24:27.939020 kernel: raid6: int64x1 gen() 2765 MB/s
Mar 17 17:24:27.939055 kernel: raid6: using algorithm neonx8 gen() 6739 MB/s
Mar 17 17:24:27.957019 kernel: raid6: .... xor() 4872 MB/s, rmw enabled
Mar 17 17:24:27.957075 kernel: raid6: using neon recovery algorithm
Mar 17 17:24:27.965578 kernel: xor: measuring software checksum speed
Mar 17 17:24:27.965629 kernel: 8regs : 10962 MB/sec
Mar 17 17:24:27.966689 kernel: 32regs : 11940 MB/sec
Mar 17 17:24:27.967876 kernel: arm64_neon : 9513 MB/sec
Mar 17 17:24:27.967919 kernel: xor: using function: 32regs (11940 MB/sec)
Mar 17 17:24:28.052260 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:24:28.071180 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:24:28.081557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:28.122350 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Mar 17 17:24:28.131052 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:28.151795 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:24:28.188000 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Mar 17 17:24:28.248356 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:24:28.264583 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:24:28.377065 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:28.392282 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:24:28.449395 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:24:28.452426 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:24:28.454878 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:28.457117 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:24:28.486673 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:24:28.531241 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:24:28.570580 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:24:28.570651 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 17:24:28.588517 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:24:28.589012 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:24:28.589875 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:99:b4:08:f8:29
Mar 17 17:24:28.584465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:24:28.584690 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:28.588527 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:28.591765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:28.592362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:28.596312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:28.613852 (udev-worker)[528]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:24:28.621785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:28.634251 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 17:24:28.636286 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:24:28.645248 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:24:28.654238 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:24:28.654308 kernel: GPT:9289727 != 16777215
Mar 17 17:24:28.654333 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:24:28.654368 kernel: GPT:9289727 != 16777215
Mar 17 17:24:28.654393 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:24:28.656250 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:28.658834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:28.668562 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:28.704281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:28.779247 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (530)
Mar 17 17:24:28.794247 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Mar 17 17:24:28.840926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:24:28.899827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:24:28.925497 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:24:28.941036 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:24:28.943676 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:24:28.961635 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:24:28.977417 disk-uuid[663]: Primary Header is updated.
Mar 17 17:24:28.977417 disk-uuid[663]: Secondary Entries is updated.
Mar 17 17:24:28.977417 disk-uuid[663]: Secondary Header is updated.
Mar 17 17:24:28.989244 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:30.004256 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:30.005905 disk-uuid[664]: The operation has completed successfully.
Mar 17 17:24:30.195420 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:24:30.195977 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:24:30.237485 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:24:30.244751 sh[925]: Success
Mar 17 17:24:30.269242 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:24:30.393929 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:24:30.403446 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:24:30.408246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:24:30.445258 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:24:30.445322 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:30.445348 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:24:30.445373 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:24:30.446570 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:24:30.562261 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:24:30.601449 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:24:30.605366 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:24:30.617448 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:24:30.623676 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:24:30.651489 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:30.651579 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:30.651613 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:30.659251 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:30.681291 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:30.681970 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:24:30.692048 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:24:30.706643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:24:30.821222 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:24:30.836643 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:24:30.895316 systemd-networkd[1117]: lo: Link UP
Mar 17 17:24:30.895338 systemd-networkd[1117]: lo: Gained carrier
Mar 17 17:24:30.899325 systemd-networkd[1117]: Enumeration completed
Mar 17 17:24:30.900943 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:30.900950 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:24:30.902157 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:24:30.908883 systemd[1]: Reached target network.target - Network.
Mar 17 17:24:30.917973 systemd-networkd[1117]: eth0: Link UP
Mar 17 17:24:30.917988 systemd-networkd[1117]: eth0: Gained carrier
Mar 17 17:24:30.918005 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:30.932291 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.30.87/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:24:31.109922 ignition[1026]: Ignition 2.20.0
Mar 17 17:24:31.109950 ignition[1026]: Stage: fetch-offline
Mar 17 17:24:31.110411 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:31.114873 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:24:31.110436 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:31.111150 ignition[1026]: Ignition finished successfully
Mar 17 17:24:31.134633 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:24:31.156983 ignition[1126]: Ignition 2.20.0
Mar 17 17:24:31.157005 ignition[1126]: Stage: fetch
Mar 17 17:24:31.157618 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:31.157642 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:31.158001 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:31.167735 ignition[1126]: PUT result: OK
Mar 17 17:24:31.170627 ignition[1126]: parsed url from cmdline: ""
Mar 17 17:24:31.170757 ignition[1126]: no config URL provided
Mar 17 17:24:31.170777 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:24:31.170802 ignition[1126]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:24:31.170848 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:31.172414 ignition[1126]: PUT result: OK
Mar 17 17:24:31.172494 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:24:31.178346 ignition[1126]: GET result: OK
Mar 17 17:24:31.178486 ignition[1126]: parsing config with SHA512: ba11fd7450316bebbecdb7711b16526fa6c74ae77c9c68daaacc29774e9323e65a4cb842d5d426f50af91597bca96263a8364a5a67e17b4e354984e6df9264a5
Mar 17 17:24:31.187717 unknown[1126]: fetched base config from "system"
Mar 17 17:24:31.188682 ignition[1126]: fetch: fetch complete
Mar 17 17:24:31.187733 unknown[1126]: fetched base config from "system"
Mar 17 17:24:31.188694 ignition[1126]: fetch: fetch passed
Mar 17 17:24:31.187746 unknown[1126]: fetched user config from "aws"
Mar 17 17:24:31.188780 ignition[1126]: Ignition finished successfully
Mar 17 17:24:31.201308 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:24:31.210670 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:24:31.238433 ignition[1132]: Ignition 2.20.0
Mar 17 17:24:31.238462 ignition[1132]: Stage: kargs
Mar 17 17:24:31.239517 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:31.239546 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:31.239704 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:31.242804 ignition[1132]: PUT result: OK
Mar 17 17:24:31.251116 ignition[1132]: kargs: kargs passed
Mar 17 17:24:31.251279 ignition[1132]: Ignition finished successfully
Mar 17 17:24:31.256134 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:24:31.271111 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:24:31.294720 ignition[1138]: Ignition 2.20.0
Mar 17 17:24:31.295295 ignition[1138]: Stage: disks
Mar 17 17:24:31.295901 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:31.295925 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:31.296097 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:31.299017 ignition[1138]: PUT result: OK
Mar 17 17:24:31.308855 ignition[1138]: disks: disks passed
Mar 17 17:24:31.309017 ignition[1138]: Ignition finished successfully
Mar 17 17:24:31.313618 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:24:31.318689 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:24:31.321134 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:24:31.327863 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:24:31.329803 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:24:31.347308 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:24:31.356643 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:24:31.406705 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:24:31.412933 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:24:31.421380 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:24:31.513240 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:24:31.513938 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:24:31.517562 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:24:31.542397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:24:31.548469 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:24:31.550823 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:24:31.550903 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:24:31.550954 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:24:31.575240 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Mar 17 17:24:31.580809 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:31.580878 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:31.580906 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:31.582094 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:24:31.598602 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:24:31.604386 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:31.607761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:24:32.080635 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:24:32.100536 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:24:32.109374 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:24:32.117030 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:24:32.457449 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:24:32.472894 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:24:32.478361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:24:32.495178 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:24:32.502264 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:32.538271 ignition[1277]: INFO : Ignition 2.20.0
Mar 17 17:24:32.538271 ignition[1277]: INFO : Stage: mount
Mar 17 17:24:32.538271 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:32.538271 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:32.538271 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:32.556333 ignition[1277]: INFO : PUT result: OK
Mar 17 17:24:32.562336 ignition[1277]: INFO : mount: mount passed
Mar 17 17:24:32.564078 ignition[1277]: INFO : Ignition finished successfully
Mar 17 17:24:32.568267 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:24:32.572178 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:24:32.590453 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:24:32.611472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:24:32.640857 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Mar 17 17:24:32.640920 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:32.642537 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:32.643740 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:32.649244 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:32.652195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:24:32.691801 ignition[1306]: INFO : Ignition 2.20.0
Mar 17 17:24:32.691801 ignition[1306]: INFO : Stage: files
Mar 17 17:24:32.695100 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:32.695100 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:32.695100 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:32.701786 ignition[1306]: INFO : PUT result: OK
Mar 17 17:24:32.706630 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:24:32.710261 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:24:32.710261 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:24:32.737807 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:24:32.740552 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:24:32.740552 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:24:32.739187 unknown[1306]: wrote ssh authorized keys file for user: core
Mar 17 17:24:32.748329 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:24:32.748329 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:24:32.793396 systemd-networkd[1117]: eth0: Gained IPv6LL
Mar 17 17:24:32.898176 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:24:33.139484 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:24:33.139484 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:24:33.146426 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:24:33.620475 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:24:33.757809 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:24:33.761725 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:24:34.064839 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:24:34.380616 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:24:34.380616 ignition[1306]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:24:34.386837 ignition[1306]: INFO : files: files passed
Mar 17 17:24:34.386837 ignition[1306]: INFO : Ignition finished successfully
Mar 17 17:24:34.410831 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:24:34.418602 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:24:34.434057 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:24:34.444679 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:24:34.445721 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:24:34.461601 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:24:34.465956 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:24:34.469350 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:24:34.475285 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:24:34.478528 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:24:34.492556 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:24:34.537409 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:24:34.537615 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:24:34.541957 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:24:34.544500 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:24:34.546661 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:24:34.559596 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:24:34.591268 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:24:34.599554 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:24:34.631346 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:24:34.636313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:34.638898 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:24:34.645941 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:24:34.646373 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:24:34.653112 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:24:34.655439 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:24:34.660430 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:24:34.662737 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:24:34.669095 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:24:34.671612 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:24:34.677723 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:24:34.680580 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:24:34.686632 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:24:34.689965 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:24:34.694398 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:24:34.694627 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:24:34.697294 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:24:34.705912 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:34.708690 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:24:34.712289 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:34.715084 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:24:34.715428 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:24:34.724521 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:24:34.724926 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:24:34.732082 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:24:34.732484 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:24:34.748658 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:24:34.755876 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:24:34.759487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:24:34.759778 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:34.763027 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:24:34.763339 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:24:34.787868 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:24:34.791317 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:24:34.803939 ignition[1359]: INFO : Ignition 2.20.0
Mar 17 17:24:34.805891 ignition[1359]: INFO : Stage: umount
Mar 17 17:24:34.807855 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:34.807855 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:34.807855 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:34.816112 ignition[1359]: INFO : PUT result: OK
Mar 17 17:24:34.823067 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:24:34.828276 ignition[1359]: INFO : umount: umount passed
Mar 17 17:24:34.828276 ignition[1359]: INFO : Ignition finished successfully
Mar 17 17:24:34.833971 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:24:34.835903 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:24:34.839752 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:24:34.839854 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:24:34.844000 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:24:34.844098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:24:34.846127 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:24:34.846322 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:24:34.850679 systemd[1]: Stopped target network.target - Network.
Mar 17 17:24:34.850885 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:24:34.850972 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:24:34.851461 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:24:34.851778 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:24:34.861961 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:34.864771 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:24:34.866459 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:24:34.868320 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:24:34.868397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:24:34.870299 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:24:34.870368 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:24:34.872385 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:24:34.872471 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:24:34.882297 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:24:34.882391 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:24:34.885637 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:24:34.889085 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:24:34.895276 systemd-networkd[1117]: eth0: DHCPv6 lease lost
Mar 17 17:24:34.917572 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:24:34.918472 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:24:34.924836 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:24:34.925017 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:24:34.930336 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:24:34.930561 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:24:34.937485 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:24:34.939451 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:24:34.946979 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:24:34.947074 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:34.960798 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:24:34.963224 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:24:34.963372 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:24:34.965781 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:24:34.965876 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:34.968001 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:24:34.968089 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:34.971184 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:24:34.971700 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:24:34.978846 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:35.022539 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:24:35.022863 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:35.027545 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:24:35.027653 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:35.032084 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:24:35.032173 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:35.043174 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:24:35.043320 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:24:35.047242 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:24:35.047347 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:24:35.049516 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:24:35.049603 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:35.076665 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:24:35.079120 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:24:35.079265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:35.081820 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:24:35.081905 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:24:35.091092 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:24:35.091195 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:35.093637 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:35.093725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:35.099600 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:24:35.102295 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:24:35.113104 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:24:35.113451 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:24:35.146787 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:24:35.160564 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:24:35.187116 systemd[1]: Switching root.
Mar 17 17:24:35.249586 systemd-journald[252]: Journal stopped
Mar 17 17:24:37.976082 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:24:37.976260 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:24:37.981328 kernel: SELinux: policy capability open_perms=1
Mar 17 17:24:37.981376 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:24:37.981408 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:24:37.981440 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:24:37.981472 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:24:37.981502 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:24:37.981550 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:24:37.981581 kernel: audit: type=1403 audit(1742232276.003:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:24:37.981623 systemd[1]: Successfully loaded SELinux policy in 72.953ms.
Mar 17 17:24:37.981670 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.105ms.
Mar 17 17:24:37.981710 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:24:37.981742 systemd[1]: Detected virtualization amazon.
Mar 17 17:24:37.981774 systemd[1]: Detected architecture arm64.
Mar 17 17:24:37.981802 systemd[1]: Detected first boot.
Mar 17 17:24:37.981834 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:24:37.981871 zram_generator::config[1402]: No configuration found.
Mar 17 17:24:37.981915 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:24:37.981948 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:24:37.981979 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:24:37.982011 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:24:37.982045 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:24:37.982077 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:24:37.982109 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:24:37.982143 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:24:37.982174 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:24:37.988546 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:24:37.988619 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:24:37.988655 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:24:37.988687 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:37.988717 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:37.988747 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:24:37.988779 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:24:37.988817 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:24:37.988855 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:24:37.988888 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:24:37.988919 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:37.988949 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:24:37.988977 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:24:37.989009 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:24:37.989042 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:24:37.989071 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:37.989102 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:24:37.989131 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:24:37.989161 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:24:37.989192 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:24:37.989956 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:24:37.989994 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:37.990029 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:37.990063 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:37.990100 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:24:37.990131 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:24:37.990172 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:24:37.992570 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:24:37.992634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:24:37.992672 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:24:37.992703 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:24:37.993951 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:24:37.994334 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:24:37.994367 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:24:37.994402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:24:37.994432 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:24:37.994461 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:24:37.994493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:24:37.994523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:24:37.994552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:24:37.994581 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:24:37.994616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:24:37.994650 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:24:37.994685 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:24:37.994713 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:24:37.994742 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:24:37.994773 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:24:37.994802 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:24:37.994830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:24:37.994860 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:24:37.994895 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:24:37.994925 kernel: loop: module loaded
Mar 17 17:24:37.994958 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:24:37.994988 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:24:37.995018 systemd[1]: Stopped verity-setup.service.
Mar 17 17:24:37.995048 kernel: ACPI: bus type drm_connector registered
Mar 17 17:24:37.995078 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:24:37.995108 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:24:37.995138 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:24:37.995172 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:24:37.995200 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:24:37.998250 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:24:37.998285 kernel: fuse: init (API version 7.39)
Mar 17 17:24:37.998323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:37.998353 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:24:37.998385 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:24:37.998414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:24:37.998443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:24:37.998472 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:24:37.998501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:24:37.998530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:24:37.998558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:24:37.998592 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:24:37.998623 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:24:37.998656 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:24:37.998688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:24:37.998721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:37.998751 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:24:37.998785 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:24:37.998815 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:24:38.000892 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:24:38.000947 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:24:38.000979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:24:38.001009 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:24:38.001043 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:24:38.001076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:24:38.001152 systemd-journald[1484]: Collecting audit messages is disabled.
Mar 17 17:24:38.004281 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:24:38.004348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:24:38.004379 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:24:38.004410 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:24:38.004441 systemd-journald[1484]: Journal started
Mar 17 17:24:38.004511 systemd-journald[1484]: Runtime Journal (/run/log/journal/ec2033caf81b6ddb61217a75af514ca5) is 8.0M, max 75.3M, 67.3M free.
Mar 17 17:24:38.013630 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:24:38.013718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:24:37.264619 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:24:37.324018 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 17 17:24:37.324837 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:24:38.033316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:24:38.061910 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:24:38.085156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:24:38.094089 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:24:38.100287 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:24:38.104345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:38.108950 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:24:38.111592 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:24:38.115632 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:24:38.125819 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:24:38.132458 kernel: loop0: detected capacity change from 0 to 53784
Mar 17 17:24:38.187431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:38.202242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:24:38.211933 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:24:38.227572 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:24:38.243891 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Mar 17 17:24:38.243924 systemd-tmpfiles[1515]: ACLs are not supported, ignoring.
Mar 17 17:24:38.244526 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:24:38.257348 kernel: loop1: detected capacity change from 0 to 113536
Mar 17 17:24:38.260574 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:24:38.272435 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:24:38.283653 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:24:38.315819 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:24:38.324151 systemd-journald[1484]: Time spent on flushing to /var/log/journal/ec2033caf81b6ddb61217a75af514ca5 is 49.595ms for 925 entries.
Mar 17 17:24:38.324151 systemd-journald[1484]: System Journal (/var/log/journal/ec2033caf81b6ddb61217a75af514ca5) is 8.0M, max 195.6M, 187.6M free.
Mar 17 17:24:38.396720 systemd-journald[1484]: Received client request to flush runtime journal.
Mar 17 17:24:38.396817 kernel: loop2: detected capacity change from 0 to 116808
Mar 17 17:24:38.328803 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:24:38.343687 udevadm[1545]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:24:38.402304 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:24:38.426809 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:24:38.443593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:24:38.490132 systemd-tmpfiles[1554]: ACLs are not supported, ignoring.
Mar 17 17:24:38.490728 systemd-tmpfiles[1554]: ACLs are not supported, ignoring.
Mar 17 17:24:38.500705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:38.508262 kernel: loop3: detected capacity change from 0 to 194096
Mar 17 17:24:38.620983 kernel: loop4: detected capacity change from 0 to 53784
Mar 17 17:24:38.654256 kernel: loop5: detected capacity change from 0 to 113536
Mar 17 17:24:38.686271 kernel: loop6: detected capacity change from 0 to 116808
Mar 17 17:24:38.703695 kernel: loop7: detected capacity change from 0 to 194096
Mar 17 17:24:38.734859 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 17 17:24:38.735913 (sd-merge)[1559]: Merged extensions into '/usr'.
Mar 17 17:24:38.749271 systemd[1]: Reloading requested from client PID 1513 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:24:38.749301 systemd[1]: Reloading...
Mar 17 17:24:38.915547 zram_generator::config[1586]: No configuration found.
Mar 17 17:24:39.305397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:24:39.413053 systemd[1]: Reloading finished in 662 ms.
Mar 17 17:24:39.454268 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:24:39.458476 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:24:39.472540 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:24:39.483616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:24:39.489412 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:39.520461 systemd[1]: Reloading requested from client PID 1638 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:24:39.520486 systemd[1]: Reloading...
Mar 17 17:24:39.535705 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:24:39.537474 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:24:39.544586 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:24:39.545133 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Mar 17 17:24:39.546412 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Mar 17 17:24:39.555145 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:24:39.555170 systemd-tmpfiles[1639]: Skipping /boot
Mar 17 17:24:39.594156 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:24:39.598578 systemd-tmpfiles[1639]: Skipping /boot
Mar 17 17:24:39.642199 systemd-udevd[1640]: Using default interface naming scheme 'v255'.
Mar 17 17:24:39.728252 zram_generator::config[1674]: No configuration found.
Mar 17 17:24:39.895082 (udev-worker)[1686]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:24:39.922597 ldconfig[1506]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:24:40.093241 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1695)
Mar 17 17:24:40.165549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:24:40.316141 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:24:40.316316 systemd[1]: Reloading finished in 795 ms.
Mar 17 17:24:40.344886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:40.349298 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:24:40.366658 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:24:40.433377 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:24:40.459287 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:24:40.475551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:24:40.483534 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:24:40.496853 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:24:40.501489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:24:40.508639 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:24:40.512948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:24:40.519583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:24:40.525667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:24:40.530906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:24:40.533177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:24:40.539572 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:24:40.548574 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:24:40.558553 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:24:40.567576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:24:40.569626 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:24:40.576671 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:24:40.586779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:40.592664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:24:40.593688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:24:40.600825 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:24:40.601129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:24:40.612149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:24:40.646230 lvm[1839]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:24:40.682722 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:24:40.694458 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:24:40.701267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:24:40.703725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:24:40.711329 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:24:40.711649 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:24:40.729069 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:24:40.748089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:24:40.767009 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:24:40.780583 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:24:40.782681 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:24:40.785910 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:24:40.798449 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:24:40.832356 lvm[1878]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:24:40.840307 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:24:40.851413 augenrules[1884]: No rules
Mar 17 17:24:40.854273 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:24:40.857934 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:24:40.858813 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:24:40.862478 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:24:40.896884 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:24:40.904492 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:24:40.909312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:41.023406 systemd-resolved[1853]: Positive Trust Anchors:
Mar 17 17:24:41.023472 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:24:41.023536 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:24:41.031467 systemd-resolved[1853]: Defaulting to hostname 'linux'.
Mar 17 17:24:41.031894 systemd-networkd[1850]: lo: Link UP
Mar 17 17:24:41.031902 systemd-networkd[1850]: lo: Gained carrier
Mar 17 17:24:41.035578 systemd-networkd[1850]: Enumeration completed
Mar 17 17:24:41.036257 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:24:41.038587 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:41.038595 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:24:41.038639 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:24:41.040902 systemd[1]: Reached target network.target - Network.
Mar 17 17:24:41.042922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:24:41.045169 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:24:41.047726 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:24:41.050102 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:24:41.052760 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:24:41.055049 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:24:41.057687 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:24:41.057998 systemd-networkd[1850]: eth0: Link UP Mar 17 17:24:41.058325 systemd-networkd[1850]: eth0: Gained carrier Mar 17 17:24:41.058360 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:24:41.061278 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:24:41.061329 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:24:41.063054 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:24:41.067425 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:24:41.072067 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:24:41.073965 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.30.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 17:24:41.083140 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:24:41.087970 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:24:41.091102 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:24:41.095633 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:24:41.099799 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:24:41.102848 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:24:41.103460 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Mar 17 17:24:41.111690 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:24:41.119822 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:24:41.129706 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:24:41.143583 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:24:41.150542 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:24:41.152619 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:24:41.156197 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:24:41.164575 systemd[1]: Started ntpd.service - Network Time Service. Mar 17 17:24:41.170941 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:24:41.177467 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 17 17:24:41.189582 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:24:41.198566 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:24:41.203046 jq[1908]: false Mar 17 17:24:41.210773 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:24:41.213177 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:24:41.216122 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:24:41.220656 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:24:41.229307 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 17 17:24:41.239945 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:24:41.242505 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:24:41.300973 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:24:41.302388 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:24:41.338859 (ntainerd)[1939]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:24:41.366572 jq[1918]: true Mar 17 17:24:41.381241 extend-filesystems[1909]: Found loop4 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found loop5 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found loop6 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found loop7 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1p1 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1p2 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1p3 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found usr Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1p4 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1p6 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1p7 Mar 17 17:24:41.381241 extend-filesystems[1909]: Found nvme0n1p9 Mar 17 17:24:41.381241 extend-filesystems[1909]: Checking size of /dev/nvme0n1p9 Mar 17 17:24:41.447464 tar[1926]: linux-arm64/helm Mar 17 17:24:41.412318 dbus-daemon[1907]: [system] SELinux support is enabled Mar 17 17:24:41.411313 systemd[1]: motdgen.service: Deactivated successfully. 
Mar 17 17:24:41.431038 dbus-daemon[1907]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 17:24:41.436503 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:24:41.455432 update_engine[1917]: I20250317 17:24:41.452955 1917 main.cc:92] Flatcar Update Engine starting Mar 17 17:24:41.462443 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:24:41.470668 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 17 17:24:41.474128 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:24:41.480304 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:24:41.482851 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:24:41.482902 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:24:41.495437 jq[1947]: true Mar 17 17:24:41.494516 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:24:41.508243 update_engine[1917]: I20250317 17:24:41.501503 1917 update_check_scheduler.cc:74] Next update check in 11m49s Mar 17 17:24:41.515149 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:24:41.548687 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 17 17:24:41.550116 extend-filesystems[1909]: Resized partition /dev/nvme0n1p9 Mar 17 17:24:41.565868 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 17 17:24:41.573913 extend-filesystems[1960]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:24:41.591837 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 17:24:41.633289 ntpd[1911]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: ---------------------------------------------------- Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: corporation. Support and training for ntp-4 are Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: available at https://www.nwtime.org/support Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: ---------------------------------------------------- Mar 17 17:24:41.637819 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: proto: precision = 0.096 usec (-23) Mar 17 17:24:41.633352 ntpd[1911]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:24:41.638701 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: basedate set to 2025-03-05 Mar 17 17:24:41.638701 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: gps base set to 2025-03-09 (week 2357) Mar 17 17:24:41.633372 ntpd[1911]: ---------------------------------------------------- Mar 17 17:24:41.633392 ntpd[1911]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:24:41.633410 ntpd[1911]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:24:41.633428 ntpd[1911]: corporation. 
Support and training for ntp-4 are Mar 17 17:24:41.633446 ntpd[1911]: available at https://www.nwtime.org/support Mar 17 17:24:41.633465 ntpd[1911]: ---------------------------------------------------- Mar 17 17:24:41.637526 ntpd[1911]: proto: precision = 0.096 usec (-23) Mar 17 17:24:41.638378 ntpd[1911]: basedate set to 2025-03-05 Mar 17 17:24:41.638404 ntpd[1911]: gps base set to 2025-03-09 (week 2357) Mar 17 17:24:41.644988 ntpd[1911]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Listen normally on 3 eth0 172.31.30.87:123 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Listen normally on 4 lo [::1]:123 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: bind(21) AF_INET6 fe80::499:b4ff:fe08:f829%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: unable to create socket on eth0 (5) for fe80::499:b4ff:fe08:f829%2#123 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: failed to init interface for address fe80::499:b4ff:fe08:f829%2 Mar 17 17:24:41.648011 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: Listening on routing socket on fd #21 for interface updates Mar 17 17:24:41.645874 ntpd[1911]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:24:41.646155 ntpd[1911]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:24:41.646392 ntpd[1911]: Listen normally on 3 eth0 172.31.30.87:123 Mar 17 17:24:41.646482 ntpd[1911]: Listen normally on 4 lo [::1]:123 Mar 17 17:24:41.646560 ntpd[1911]: bind(21) AF_INET6 fe80::499:b4ff:fe08:f829%2#123 flags 0x11 failed: Cannot assign requested address Mar 17 
17:24:41.646601 ntpd[1911]: unable to create socket on eth0 (5) for fe80::499:b4ff:fe08:f829%2#123 Mar 17 17:24:41.646967 ntpd[1911]: failed to init interface for address fe80::499:b4ff:fe08:f829%2 Mar 17 17:24:41.647030 ntpd[1911]: Listening on routing socket on fd #21 for interface updates Mar 17 17:24:41.655727 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:41.669313 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:41.669313 ntpd[1911]: 17 Mar 17:24:41 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:41.655793 ntpd[1911]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:41.672932 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 17:24:41.687565 extend-filesystems[1960]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 17:24:41.687565 extend-filesystems[1960]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:24:41.687565 extend-filesystems[1960]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 17 17:24:41.706863 extend-filesystems[1909]: Resized filesystem in /dev/nvme0n1p9 Mar 17 17:24:41.700706 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:24:41.702722 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:24:41.799236 bash[1988]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:24:41.794871 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:24:41.826246 coreos-metadata[1906]: Mar 17 17:24:41.820 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:24:41.846049 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1695) Mar 17 17:24:41.844023 systemd[1]: Starting sshkeys.service... 
Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.835 INFO Fetch successful Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.836 INFO Fetch successful Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.836 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.837 INFO Fetch successful Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.837 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.838 INFO Fetch successful Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.838 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.841 INFO Fetch failed with 404: resource not found Mar 17 17:24:41.846241 coreos-metadata[1906]: Mar 17 17:24:41.841 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 17 17:24:41.850358 coreos-metadata[1906]: Mar 17 17:24:41.847 INFO Fetch successful Mar 17 17:24:41.850358 coreos-metadata[1906]: Mar 17 17:24:41.847 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 17 17:24:41.855045 coreos-metadata[1906]: Mar 17 17:24:41.854 INFO Fetch successful Mar 17 17:24:41.855045 coreos-metadata[1906]: Mar 17 17:24:41.854 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 17 17:24:41.857184 coreos-metadata[1906]: Mar 17 17:24:41.855 INFO Fetch successful Mar 17 17:24:41.857184 coreos-metadata[1906]: Mar 17 
17:24:41.855 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 17 17:24:41.857019 systemd-logind[1916]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:24:41.857053 systemd-logind[1916]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 17 17:24:41.929591 coreos-metadata[1906]: Mar 17 17:24:41.861 INFO Fetch successful Mar 17 17:24:41.929591 coreos-metadata[1906]: Mar 17 17:24:41.861 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 17 17:24:41.929591 coreos-metadata[1906]: Mar 17 17:24:41.862 INFO Fetch successful Mar 17 17:24:41.859629 systemd-logind[1916]: New seat seat0. Mar 17 17:24:41.930096 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:24:41.966553 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:24:41.975967 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:24:42.042122 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:24:42.045247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:24:42.136894 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:24:42.191918 containerd[1939]: time="2025-03-17T17:24:42.191755137Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:24:42.233782 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 17:24:42.234042 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 17 17:24:42.240323 dbus-daemon[1907]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1956 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 17:24:42.256930 systemd[1]: Starting polkit.service - Authorization Manager... Mar 17 17:24:42.268681 locksmithd[1958]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:24:42.333414 coreos-metadata[1997]: Mar 17 17:24:42.333 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:24:42.335858 coreos-metadata[1997]: Mar 17 17:24:42.334 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 17 17:24:42.337174 polkitd[2053]: Started polkitd version 121 Mar 17 17:24:42.340457 coreos-metadata[1997]: Mar 17 17:24:42.339 INFO Fetch successful Mar 17 17:24:42.340457 coreos-metadata[1997]: Mar 17 17:24:42.339 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 17:24:42.341039 coreos-metadata[1997]: Mar 17 17:24:42.340 INFO Fetch successful Mar 17 17:24:42.345431 unknown[1997]: wrote ssh authorized keys file for user: core Mar 17 17:24:42.383352 polkitd[2053]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 17:24:42.383477 polkitd[2053]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 17:24:42.390888 polkitd[2053]: Finished loading, compiling and executing 2 rules Mar 17 17:24:42.398557 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 17:24:42.400153 systemd[1]: Started polkit.service - Authorization Manager. Mar 17 17:24:42.404420 polkitd[2053]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 17:24:42.430107 containerd[1939]: time="2025-03-17T17:24:42.430039523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.437494235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.437564339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.437598791Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.437914619Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.437949815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.438081839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.438112595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.438446051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.438480815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.438513659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439240 containerd[1939]: time="2025-03-17T17:24:42.438537875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439790 containerd[1939]: time="2025-03-17T17:24:42.438726947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:42.439790 containerd[1939]: time="2025-03-17T17:24:42.439129031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:42.447561 containerd[1939]: time="2025-03-17T17:24:42.447497555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:42.448982 containerd[1939]: time="2025-03-17T17:24:42.448392563Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:24:42.448982 containerd[1939]: time="2025-03-17T17:24:42.448649495Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 17 17:24:42.448982 containerd[1939]: time="2025-03-17T17:24:42.448752311Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:24:42.453461 update-ssh-keys[2084]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:24:42.456962 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:24:42.462764 containerd[1939]: time="2025-03-17T17:24:42.460876763Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:24:42.462764 containerd[1939]: time="2025-03-17T17:24:42.461048675Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:24:42.462764 containerd[1939]: time="2025-03-17T17:24:42.461353619Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:24:42.462764 containerd[1939]: time="2025-03-17T17:24:42.461439095Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:24:42.462764 containerd[1939]: time="2025-03-17T17:24:42.461502707Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:24:42.465789 containerd[1939]: time="2025-03-17T17:24:42.465481019Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:24:42.467258 containerd[1939]: time="2025-03-17T17:24:42.465922931Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:24:42.467258 containerd[1939]: time="2025-03-17T17:24:42.466150835Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:24:42.467258 containerd[1939]: time="2025-03-17T17:24:42.466184303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 17 17:24:42.470325 systemd[1]: Finished sshkeys.service. Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471505787Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471655031Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471689879Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471725447Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471758807Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471799007Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471831047Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471860315Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471886883Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471930371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471962039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.471991799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.472025567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.473379 containerd[1939]: time="2025-03-17T17:24:42.472053827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.472084187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.472111151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.472147055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.472178183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476514419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476574011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476605391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476638199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476679515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476729771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476766155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476793359Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476939471Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:24:42.477733 containerd[1939]: time="2025-03-17T17:24:42.476979239Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:24:42.480176 containerd[1939]: time="2025-03-17T17:24:42.477003539Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:24:42.480176 containerd[1939]: time="2025-03-17T17:24:42.477037511Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:24:42.480176 containerd[1939]: time="2025-03-17T17:24:42.477062063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.480176 containerd[1939]: time="2025-03-17T17:24:42.477089747Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:24:42.480176 containerd[1939]: time="2025-03-17T17:24:42.477113615Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:24:42.480176 containerd[1939]: time="2025-03-17T17:24:42.477138395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:24:42.480791 containerd[1939]: time="2025-03-17T17:24:42.477649499Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:24:42.480791 containerd[1939]: time="2025-03-17T17:24:42.477743267Z" level=info msg="Connect containerd service"
Mar 17 17:24:42.480791 containerd[1939]: time="2025-03-17T17:24:42.477810431Z" level=info msg="using legacy CRI server"
Mar 17 17:24:42.480791 containerd[1939]: time="2025-03-17T17:24:42.477827999Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:24:42.480791 containerd[1939]: time="2025-03-17T17:24:42.478073651Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:24:42.486511 containerd[1939]: time="2025-03-17T17:24:42.485423291Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:24:42.486511 containerd[1939]: time="2025-03-17T17:24:42.485940251Z" level=info msg="Start subscribing containerd event"
Mar 17 17:24:42.486511 containerd[1939]: time="2025-03-17T17:24:42.486018371Z" level=info msg="Start recovering state"
Mar 17 17:24:42.486511 containerd[1939]: time="2025-03-17T17:24:42.486142283Z" level=info msg="Start event monitor"
Mar 17 17:24:42.486511 containerd[1939]: time="2025-03-17T17:24:42.486165875Z" level=info msg="Start snapshots syncer"
Mar 17 17:24:42.486511 containerd[1939]: time="2025-03-17T17:24:42.486186863Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:24:42.486511 containerd[1939]: time="2025-03-17T17:24:42.486245807Z" level=info msg="Start streaming server"
Mar 17 17:24:42.490839 containerd[1939]: time="2025-03-17T17:24:42.490318211Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:24:42.490839 containerd[1939]: time="2025-03-17T17:24:42.490455455Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:24:42.490873 systemd-hostnamed[1956]: Hostname set to (transient)
Mar 17 17:24:42.491040 systemd-resolved[1853]: System hostname changed to 'ip-172-31-30-87'.
Mar 17 17:24:42.493923 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:24:42.500894 containerd[1939]: time="2025-03-17T17:24:42.494307191Z" level=info msg="containerd successfully booted in 0.306933s"
Mar 17 17:24:42.635598 ntpd[1911]: bind(24) AF_INET6 fe80::499:b4ff:fe08:f829%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:24:42.635667 ntpd[1911]: unable to create socket on eth0 (6) for fe80::499:b4ff:fe08:f829%2#123
Mar 17 17:24:42.636091 ntpd[1911]: 17 Mar 17:24:42 ntpd[1911]: bind(24) AF_INET6 fe80::499:b4ff:fe08:f829%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:24:42.636091 ntpd[1911]: 17 Mar 17:24:42 ntpd[1911]: unable to create socket on eth0 (6) for fe80::499:b4ff:fe08:f829%2#123
Mar 17 17:24:42.636091 ntpd[1911]: 17 Mar 17:24:42 ntpd[1911]: failed to init interface for address fe80::499:b4ff:fe08:f829%2
Mar 17 17:24:42.635696 ntpd[1911]: failed to init interface for address fe80::499:b4ff:fe08:f829%2
Mar 17 17:24:42.716304 systemd-networkd[1850]: eth0: Gained IPv6LL
Mar 17 17:24:42.730278 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:24:42.733842 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:24:42.748550 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 17 17:24:42.760260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:24:42.766766 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:24:42.883433 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:24:42.894955 amazon-ssm-agent[2113]: Initializing new seelog logger
Mar 17 17:24:42.897226 amazon-ssm-agent[2113]: New Seelog Logger Creation Complete
Mar 17 17:24:42.897226 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.897226 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.897226 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 processing appconfig overrides
Mar 17 17:24:42.897948 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.898043 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.898278 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 processing appconfig overrides
Mar 17 17:24:42.898674 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.898758 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.898965 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 processing appconfig overrides
Mar 17 17:24:42.899851 amazon-ssm-agent[2113]: 2025-03-17 17:24:42 INFO Proxy environment variables:
Mar 17 17:24:42.906455 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.906455 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:24:42.906455 amazon-ssm-agent[2113]: 2025/03/17 17:24:42 processing appconfig overrides
Mar 17 17:24:43.000304 amazon-ssm-agent[2113]: 2025-03-17 17:24:42 INFO https_proxy:
Mar 17 17:24:43.101293 amazon-ssm-agent[2113]: 2025-03-17 17:24:42 INFO http_proxy:
Mar 17 17:24:43.201297 amazon-ssm-agent[2113]: 2025-03-17 17:24:42 INFO no_proxy:
Mar 17 17:24:43.237613 tar[1926]: linux-arm64/LICENSE
Mar 17 17:24:43.238381 tar[1926]: linux-arm64/README.md
Mar 17 17:24:43.275919 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:24:43.300038 amazon-ssm-agent[2113]: 2025-03-17 17:24:42 INFO Checking if agent identity type OnPrem can be assumed
Mar 17 17:24:43.383673 sshd_keygen[1941]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:24:43.399884 amazon-ssm-agent[2113]: 2025-03-17 17:24:42 INFO Checking if agent identity type EC2 can be assumed
Mar 17 17:24:43.453254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:24:43.475420 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:24:43.479223 systemd[1]: Started sshd@0-172.31.30.87:22-139.178.68.195:54408.service - OpenSSH per-connection server daemon (139.178.68.195:54408).
Mar 17 17:24:43.496118 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:24:43.496604 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:24:43.502966 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO Agent will take identity from EC2
Mar 17 17:24:43.505876 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:24:43.556739 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:24:43.571798 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:24:43.581882 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:24:43.584746 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:24:43.605777 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 17 17:24:43.706290 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 17 17:24:43.792171 sshd[2143]: Accepted publickey for core from 139.178.68.195 port 54408 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:43.795659 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:43.806197 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 17 17:24:43.811695 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 17:24:43.821808 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 17:24:43.831484 systemd-logind[1916]: New session 1 of user core.
Mar 17 17:24:43.861962 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [amazon-ssm-agent] Starting Core Agent
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [Registrar] Starting registrar module
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [EC2Identity] EC2 registration was successful.
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [CredentialRefresher] credentialRefresher has started
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 17 17:24:43.865102 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 17 17:24:43.877737 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 17:24:43.895816 (systemd)[2154]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 17:24:43.905883 amazon-ssm-agent[2113]: 2025-03-17 17:24:43 INFO [CredentialRefresher] Next credential rotation will be in 30.133290459033333 minutes
Mar 17 17:24:44.118259 systemd[2154]: Queued start job for default target default.target.
Mar 17 17:24:44.126510 systemd[2154]: Created slice app.slice - User Application Slice.
Mar 17 17:24:44.126573 systemd[2154]: Reached target paths.target - Paths.
Mar 17 17:24:44.126606 systemd[2154]: Reached target timers.target - Timers.
Mar 17 17:24:44.129239 systemd[2154]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 17:24:44.160599 systemd[2154]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 17:24:44.160848 systemd[2154]: Reached target sockets.target - Sockets.
Mar 17 17:24:44.160896 systemd[2154]: Reached target basic.target - Basic System.
Mar 17 17:24:44.160990 systemd[2154]: Reached target default.target - Main User Target.
Mar 17 17:24:44.161054 systemd[2154]: Startup finished in 251ms.
Mar 17 17:24:44.161231 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 17:24:44.171529 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 17:24:44.332820 systemd[1]: Started sshd@1-172.31.30.87:22-139.178.68.195:54414.service - OpenSSH per-connection server daemon (139.178.68.195:54414).
Mar 17 17:24:44.510894 sshd[2165]: Accepted publickey for core from 139.178.68.195 port 54414 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:44.513366 sshd-session[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:44.521963 systemd-logind[1916]: New session 2 of user core.
Mar 17 17:24:44.528489 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 17:24:44.657913 sshd[2167]: Connection closed by 139.178.68.195 port 54414
Mar 17 17:24:44.658529 sshd-session[2165]: pam_unix(sshd:session): session closed for user core
Mar 17 17:24:44.663164 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 17:24:44.665068 systemd[1]: sshd@1-172.31.30.87:22-139.178.68.195:54414.service: Deactivated successfully.
Mar 17 17:24:44.671424 systemd-logind[1916]: Session 2 logged out. Waiting for processes to exit.
Mar 17 17:24:44.673666 systemd-logind[1916]: Removed session 2.
Mar 17 17:24:44.692460 systemd[1]: Started sshd@2-172.31.30.87:22-139.178.68.195:54420.service - OpenSSH per-connection server daemon (139.178.68.195:54420).
Mar 17 17:24:44.893046 amazon-ssm-agent[2113]: 2025-03-17 17:24:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 17 17:24:44.894079 sshd[2172]: Accepted publickey for core from 139.178.68.195 port 54420 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:44.898364 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:44.912179 systemd-logind[1916]: New session 3 of user core.
Mar 17 17:24:44.920073 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:24:44.993346 amazon-ssm-agent[2113]: 2025-03-17 17:24:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2175) started
Mar 17 17:24:45.062587 sshd[2179]: Connection closed by 139.178.68.195 port 54420
Mar 17 17:24:45.064325 sshd-session[2172]: pam_unix(sshd:session): session closed for user core
Mar 17 17:24:45.071141 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:24:45.073093 systemd[1]: sshd@2-172.31.30.87:22-139.178.68.195:54420.service: Deactivated successfully.
Mar 17 17:24:45.094991 amazon-ssm-agent[2113]: 2025-03-17 17:24:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 17 17:24:45.098375 systemd-logind[1916]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:24:45.102717 systemd-logind[1916]: Removed session 3.
Mar 17 17:24:45.133541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:24:45.136683 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:24:45.142365 systemd[1]: Startup finished in 1.092s (kernel) + 9.181s (initrd) + 9.209s (userspace) = 19.484s.
Mar 17 17:24:45.150497 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:24:45.634596 ntpd[1911]: Listen normally on 7 eth0 [fe80::499:b4ff:fe08:f829%2]:123
Mar 17 17:24:45.635418 ntpd[1911]: 17 Mar 17:24:45 ntpd[1911]: Listen normally on 7 eth0 [fe80::499:b4ff:fe08:f829%2]:123
Mar 17 17:24:46.464264 kubelet[2194]: E0317 17:24:46.464133 2194 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:24:46.467871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:24:46.468189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:24:46.471329 systemd[1]: kubelet.service: Consumed 1.315s CPU time.
Mar 17 17:24:48.900962 systemd-resolved[1853]: Clock change detected. Flushing caches.
Mar 17 17:24:55.364225 systemd[1]: Started sshd@3-172.31.30.87:22-139.178.68.195:36032.service - OpenSSH per-connection server daemon (139.178.68.195:36032).
Mar 17 17:24:55.555189 sshd[2207]: Accepted publickey for core from 139.178.68.195 port 36032 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:55.557605 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:55.564990 systemd-logind[1916]: New session 4 of user core.
Mar 17 17:24:55.577267 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:24:55.703747 sshd[2209]: Connection closed by 139.178.68.195 port 36032
Mar 17 17:24:55.704326 sshd-session[2207]: pam_unix(sshd:session): session closed for user core
Mar 17 17:24:55.709396 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:24:55.712444 systemd[1]: sshd@3-172.31.30.87:22-139.178.68.195:36032.service: Deactivated successfully.
Mar 17 17:24:55.716976 systemd-logind[1916]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:24:55.718770 systemd-logind[1916]: Removed session 4.
Mar 17 17:24:55.742557 systemd[1]: Started sshd@4-172.31.30.87:22-139.178.68.195:58916.service - OpenSSH per-connection server daemon (139.178.68.195:58916).
Mar 17 17:24:55.925688 sshd[2214]: Accepted publickey for core from 139.178.68.195 port 58916 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:55.928230 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:55.935488 systemd-logind[1916]: New session 5 of user core.
Mar 17 17:24:55.944273 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:24:56.062209 sshd[2216]: Connection closed by 139.178.68.195 port 58916
Mar 17 17:24:56.063651 sshd-session[2214]: pam_unix(sshd:session): session closed for user core
Mar 17 17:24:56.069085 systemd-logind[1916]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:24:56.071210 systemd[1]: sshd@4-172.31.30.87:22-139.178.68.195:58916.service: Deactivated successfully.
Mar 17 17:24:56.076152 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:24:56.077624 systemd-logind[1916]: Removed session 5.
Mar 17 17:24:56.102560 systemd[1]: Started sshd@5-172.31.30.87:22-139.178.68.195:58932.service - OpenSSH per-connection server daemon (139.178.68.195:58932).
Mar 17 17:24:56.288184 sshd[2221]: Accepted publickey for core from 139.178.68.195 port 58932 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:56.290640 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:56.299346 systemd-logind[1916]: New session 6 of user core.
Mar 17 17:24:56.310302 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:24:56.436039 sshd[2223]: Connection closed by 139.178.68.195 port 58932
Mar 17 17:24:56.436837 sshd-session[2221]: pam_unix(sshd:session): session closed for user core
Mar 17 17:24:56.443694 systemd[1]: sshd@5-172.31.30.87:22-139.178.68.195:58932.service: Deactivated successfully.
Mar 17 17:24:56.447665 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:24:56.448946 systemd-logind[1916]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:24:56.450993 systemd-logind[1916]: Removed session 6.
Mar 17 17:24:56.474574 systemd[1]: Started sshd@6-172.31.30.87:22-139.178.68.195:58942.service - OpenSSH per-connection server daemon (139.178.68.195:58942).
Mar 17 17:24:56.661350 sshd[2228]: Accepted publickey for core from 139.178.68.195 port 58942 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:56.663849 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:56.674246 systemd-logind[1916]: New session 7 of user core.
Mar 17 17:24:56.677334 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:24:56.792687 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:24:56.793384 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:24:56.795529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:24:56.809370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:24:56.824886 sudo[2231]: pam_unix(sudo:session): session closed for user root
Mar 17 17:24:56.850700 sshd[2230]: Connection closed by 139.178.68.195 port 58942
Mar 17 17:24:56.849767 sshd-session[2228]: pam_unix(sshd:session): session closed for user core
Mar 17 17:24:56.855401 systemd[1]: sshd@6-172.31.30.87:22-139.178.68.195:58942.service: Deactivated successfully.
Mar 17 17:24:56.859065 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:24:56.862899 systemd-logind[1916]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:24:56.869948 systemd-logind[1916]: Removed session 7.
Mar 17 17:24:56.890731 systemd[1]: Started sshd@7-172.31.30.87:22-139.178.68.195:58954.service - OpenSSH per-connection server daemon (139.178.68.195:58954).
Mar 17 17:24:57.080948 sshd[2239]: Accepted publickey for core from 139.178.68.195 port 58954 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:57.084622 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:57.096632 systemd-logind[1916]: New session 8 of user core.
Mar 17 17:24:57.104322 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:24:57.150837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:24:57.169530 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:24:57.275966 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:24:57.276757 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:24:57.284097 sudo[2254]: pam_unix(sudo:session): session closed for user root
Mar 17 17:24:57.295158 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:24:57.296459 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:24:57.326153 kubelet[2247]: E0317 17:24:57.325856 2247 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:24:57.326288 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:24:57.335415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:24:57.335786 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:24:57.377541 augenrules[2278]: No rules
Mar 17 17:24:57.380257 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:24:57.380622 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:24:57.382969 sudo[2252]: pam_unix(sudo:session): session closed for user root
Mar 17 17:24:57.405541 sshd[2243]: Connection closed by 139.178.68.195 port 58954
Mar 17 17:24:57.406349 sshd-session[2239]: pam_unix(sshd:session): session closed for user core
Mar 17 17:24:57.411285 systemd-logind[1916]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:24:57.411917 systemd[1]: sshd@7-172.31.30.87:22-139.178.68.195:58954.service: Deactivated successfully.
Mar 17 17:24:57.414838 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:24:57.419423 systemd-logind[1916]: Removed session 8.
Mar 17 17:24:57.446561 systemd[1]: Started sshd@8-172.31.30.87:22-139.178.68.195:58960.service - OpenSSH per-connection server daemon (139.178.68.195:58960).
Mar 17 17:24:57.622980 sshd[2286]: Accepted publickey for core from 139.178.68.195 port 58960 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:24:57.624845 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:24:57.631714 systemd-logind[1916]: New session 9 of user core.
Mar 17 17:24:57.643287 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:24:57.745473 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:24:57.746580 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:24:58.192544 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:24:58.202535 (dockerd)[2307]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:24:58.534706 dockerd[2307]: time="2025-03-17T17:24:58.534509267Z" level=info msg="Starting up"
Mar 17 17:24:58.649493 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport260193780-merged.mount: Deactivated successfully.
Mar 17 17:24:58.683287 dockerd[2307]: time="2025-03-17T17:24:58.683223696Z" level=info msg="Loading containers: start."
Mar 17 17:24:58.939057 kernel: Initializing XFRM netlink socket
Mar 17 17:24:58.972481 (udev-worker)[2329]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:24:59.063672 systemd-networkd[1850]: docker0: Link UP
Mar 17 17:24:59.101490 dockerd[2307]: time="2025-03-17T17:24:59.101347042Z" level=info msg="Loading containers: done."
Mar 17 17:24:59.126044 dockerd[2307]: time="2025-03-17T17:24:59.125952574Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:24:59.126276 dockerd[2307]: time="2025-03-17T17:24:59.126133510Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Mar 17 17:24:59.126336 dockerd[2307]: time="2025-03-17T17:24:59.126319762Z" level=info msg="Daemon has completed initialization"
Mar 17 17:24:59.179160 dockerd[2307]: time="2025-03-17T17:24:59.179073539Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:24:59.180118 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:24:59.643973 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1728038802-merged.mount: Deactivated successfully.
Mar 17 17:25:00.560902 containerd[1939]: time="2025-03-17T17:25:00.560810377Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 17:25:01.187716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387486157.mount: Deactivated successfully.
Mar 17 17:25:02.650111 containerd[1939]: time="2025-03-17T17:25:02.649259536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:02.651631 containerd[1939]: time="2025-03-17T17:25:02.651439420Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793524"
Mar 17 17:25:02.652629 containerd[1939]: time="2025-03-17T17:25:02.652572040Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:02.659525 containerd[1939]: time="2025-03-17T17:25:02.659427472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:02.662108 containerd[1939]: time="2025-03-17T17:25:02.661752796Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.100061823s"
Mar 17 17:25:02.662108 containerd[1939]: time="2025-03-17T17:25:02.661819492Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\""
Mar 17 17:25:02.703905 containerd[1939]: time="2025-03-17T17:25:02.703842664Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 17:25:04.205397 containerd[1939]: time="2025-03-17T17:25:04.205340008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:04.207399 containerd[1939]: time="2025-03-17T17:25:04.207303184Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861167"
Mar 17 17:25:04.208256 containerd[1939]: time="2025-03-17T17:25:04.208203964Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:04.214424 containerd[1939]: time="2025-03-17T17:25:04.214361452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:04.217647 containerd[1939]: time="2025-03-17T17:25:04.217474744Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.513544108s"
Mar 17 17:25:04.217647 containerd[1939]: time="2025-03-17T17:25:04.217530676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\""
Mar 17 17:25:04.259201 containerd[1939]: time="2025-03-17T17:25:04.259083112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 17:25:05.365312 containerd[1939]: time="2025-03-17T17:25:05.365239781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:05.367354 containerd[1939]: time="2025-03-17T17:25:05.367284941Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264636"
Mar 17 17:25:05.368066 containerd[1939]: time="2025-03-17T17:25:05.367781645Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:05.373367 containerd[1939]: time="2025-03-17T17:25:05.373285157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:05.377463 containerd[1939]: time="2025-03-17T17:25:05.377389565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.118248373s"
Mar 17 17:25:05.377463 containerd[1939]: time="2025-03-17T17:25:05.377454041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\""
Mar 17 17:25:05.416212 containerd[1939]: time="2025-03-17T17:25:05.416130402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:25:06.670249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752950656.mount: Deactivated successfully.
Mar 17 17:25:07.118053 containerd[1939]: time="2025-03-17T17:25:07.117861162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:07.119769 containerd[1939]: time="2025-03-17T17:25:07.119684958Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771848"
Mar 17 17:25:07.121515 containerd[1939]: time="2025-03-17T17:25:07.121432266Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:07.124923 containerd[1939]: time="2025-03-17T17:25:07.124821978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:25:07.126926 containerd[1939]: time="2025-03-17T17:25:07.126470010Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.710254984s"
Mar 17 17:25:07.126926 containerd[1939]: time="2025-03-17T17:25:07.126521490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\""
Mar 17 17:25:07.167955 containerd[1939]: time="2025-03-17T17:25:07.167873610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:25:07.586236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:25:07.595453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:25:07.794854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811278938.mount: Deactivated successfully. Mar 17 17:25:07.966814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:07.980287 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:08.101082 kubelet[2606]: E0317 17:25:08.100424 2606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:08.106338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:08.106761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:09.025481 containerd[1939]: time="2025-03-17T17:25:09.025151347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:09.027335 containerd[1939]: time="2025-03-17T17:25:09.027260659Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Mar 17 17:25:09.028075 containerd[1939]: time="2025-03-17T17:25:09.027715759Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:09.038859 containerd[1939]: time="2025-03-17T17:25:09.038226536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:09.043630 containerd[1939]: 
time="2025-03-17T17:25:09.043551824Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.875605086s" Mar 17 17:25:09.043630 containerd[1939]: time="2025-03-17T17:25:09.043621532Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:25:09.082774 containerd[1939]: time="2025-03-17T17:25:09.082601168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:25:09.566685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387801180.mount: Deactivated successfully. Mar 17 17:25:09.573634 containerd[1939]: time="2025-03-17T17:25:09.573358918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:09.574970 containerd[1939]: time="2025-03-17T17:25:09.574892086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Mar 17 17:25:09.575758 containerd[1939]: time="2025-03-17T17:25:09.575675230Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:09.580968 containerd[1939]: time="2025-03-17T17:25:09.580894030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:09.582643 containerd[1939]: time="2025-03-17T17:25:09.582446914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id 
\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 499.786958ms" Mar 17 17:25:09.582643 containerd[1939]: time="2025-03-17T17:25:09.582501466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 17:25:09.626291 containerd[1939]: time="2025-03-17T17:25:09.626189902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:25:10.148113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573949148.mount: Deactivated successfully. Mar 17 17:25:12.548852 containerd[1939]: time="2025-03-17T17:25:12.548771053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:12.551136 containerd[1939]: time="2025-03-17T17:25:12.551059753Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Mar 17 17:25:12.553091 containerd[1939]: time="2025-03-17T17:25:12.553041445Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:12.560470 containerd[1939]: time="2025-03-17T17:25:12.560375365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:12.562855 containerd[1939]: time="2025-03-17T17:25:12.562669549Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest 
\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.936426511s" Mar 17 17:25:12.562855 containerd[1939]: time="2025-03-17T17:25:12.562722589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 17:25:12.787947 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 17:25:18.357087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:25:18.365579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:18.661540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:18.673495 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:18.758060 kubelet[2773]: E0317 17:25:18.756162 2773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:18.759170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:18.759463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:18.875465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:18.884518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:18.933481 systemd[1]: Reloading requested from client PID 2787 ('systemctl') (unit session-9.scope)... Mar 17 17:25:18.933506 systemd[1]: Reloading... Mar 17 17:25:19.146086 zram_generator::config[2830]: No configuration found. 
Mar 17 17:25:19.387856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:19.552754 systemd[1]: Reloading finished in 618 ms. Mar 17 17:25:19.653292 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:25:19.653490 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:25:19.654359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:19.666765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:19.935699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:19.960585 (kubelet)[2890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:25:20.032935 kubelet[2890]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:20.034096 kubelet[2890]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:25:20.034096 kubelet[2890]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:25:20.034096 kubelet[2890]: I0317 17:25:20.033603 2890 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:25:21.465391 kubelet[2890]: I0317 17:25:21.465318 2890 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:25:21.465391 kubelet[2890]: I0317 17:25:21.465377 2890 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:25:21.465986 kubelet[2890]: I0317 17:25:21.465708 2890 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:25:21.491702 kubelet[2890]: E0317 17:25:21.491621 2890 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.492312 kubelet[2890]: I0317 17:25:21.492122 2890 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:25:21.505789 kubelet[2890]: I0317 17:25:21.505723 2890 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:25:21.506312 kubelet[2890]: I0317 17:25:21.506247 2890 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:25:21.506600 kubelet[2890]: I0317 17:25:21.506313 2890 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:25:21.506781 kubelet[2890]: I0317 17:25:21.506619 2890 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
17:25:21.506781 kubelet[2890]: I0317 17:25:21.506639 2890 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:25:21.506885 kubelet[2890]: I0317 17:25:21.506870 2890 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:21.509854 kubelet[2890]: I0317 17:25:21.508390 2890 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:25:21.509854 kubelet[2890]: I0317 17:25:21.508995 2890 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:25:21.509854 kubelet[2890]: I0317 17:25:21.509145 2890 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:25:21.509854 kubelet[2890]: W0317 17:25:21.509133 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-87&limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.509854 kubelet[2890]: I0317 17:25:21.509218 2890 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:25:21.509854 kubelet[2890]: E0317 17:25:21.509218 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-87&limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.511699 kubelet[2890]: W0317 17:25:21.511597 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.511699 kubelet[2890]: E0317 17:25:21.511700 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.515444 kubelet[2890]: I0317 17:25:21.515396 2890 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:25:21.515969 kubelet[2890]: I0317 17:25:21.515945 2890 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:25:21.516164 kubelet[2890]: W0317 17:25:21.516143 2890 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:25:21.518150 kubelet[2890]: I0317 17:25:21.518114 2890 server.go:1264] "Started kubelet" Mar 17 17:25:21.528708 kubelet[2890]: I0317 17:25:21.528669 2890 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:25:21.532165 kubelet[2890]: E0317 17:25:21.531762 2890 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.87:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-87.182da710d049f5d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-87,UID:ip-172-31-30-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-87,},FirstTimestamp:2025-03-17 17:25:21.518073298 +0000 UTC m=+1.550725593,LastTimestamp:2025-03-17 17:25:21.518073298 +0000 UTC m=+1.550725593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-87,}" Mar 17 17:25:21.532165 kubelet[2890]: I0317 17:25:21.526889 2890 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:25:21.534460 kubelet[2890]: I0317 17:25:21.534144 2890 server.go:455] "Adding debug handlers to kubelet server" Mar 17 
17:25:21.536400 kubelet[2890]: I0317 17:25:21.536302 2890 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:25:21.536790 kubelet[2890]: I0317 17:25:21.536687 2890 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:25:21.540282 kubelet[2890]: I0317 17:25:21.540249 2890 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:25:21.543170 kubelet[2890]: E0317 17:25:21.542190 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-87?timeout=10s\": dial tcp 172.31.30.87:6443: connect: connection refused" interval="200ms" Mar 17 17:25:21.543170 kubelet[2890]: I0317 17:25:21.542445 2890 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:25:21.545585 kubelet[2890]: W0317 17:25:21.545516 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.545585 kubelet[2890]: E0317 17:25:21.545586 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.546413 kubelet[2890]: I0317 17:25:21.545934 2890 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:25:21.546413 kubelet[2890]: I0317 17:25:21.546106 2890 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:25:21.549415 
kubelet[2890]: I0317 17:25:21.549320 2890 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:25:21.550126 kubelet[2890]: I0317 17:25:21.549935 2890 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:25:21.560376 kubelet[2890]: E0317 17:25:21.560323 2890 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:25:21.573330 kubelet[2890]: I0317 17:25:21.572949 2890 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:25:21.577784 kubelet[2890]: I0317 17:25:21.577672 2890 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:25:21.577784 kubelet[2890]: I0317 17:25:21.577713 2890 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:25:21.577784 kubelet[2890]: I0317 17:25:21.577744 2890 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:25:21.578303 kubelet[2890]: E0317 17:25:21.578152 2890 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:25:21.581873 kubelet[2890]: W0317 17:25:21.581332 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.581873 kubelet[2890]: E0317 17:25:21.581401 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:21.586247 kubelet[2890]: I0317 17:25:21.586195 2890 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 
17:25:21.586599 kubelet[2890]: I0317 17:25:21.586539 2890 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:25:21.586673 kubelet[2890]: I0317 17:25:21.586606 2890 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:21.590535 kubelet[2890]: I0317 17:25:21.590498 2890 policy_none.go:49] "None policy: Start" Mar 17 17:25:21.591997 kubelet[2890]: I0317 17:25:21.591905 2890 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:25:21.591997 kubelet[2890]: I0317 17:25:21.591950 2890 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:25:21.604596 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:25:21.624945 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:25:21.631368 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:25:21.638856 kubelet[2890]: I0317 17:25:21.638644 2890 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:25:21.639052 kubelet[2890]: I0317 17:25:21.638961 2890 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:25:21.639212 kubelet[2890]: I0317 17:25:21.639161 2890 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:25:21.643094 kubelet[2890]: E0317 17:25:21.642872 2890 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-87\" not found" Mar 17 17:25:21.645206 kubelet[2890]: I0317 17:25:21.644685 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-87" Mar 17 17:25:21.646601 kubelet[2890]: E0317 17:25:21.646494 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.87:6443/api/v1/nodes\": dial tcp 172.31.30.87:6443: 
connect: connection refused" node="ip-172-31-30-87" Mar 17 17:25:21.678766 kubelet[2890]: I0317 17:25:21.678707 2890 topology_manager.go:215] "Topology Admit Handler" podUID="6d5e5c799e4cf70fc35ecebd38f02245" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-87" Mar 17 17:25:21.681244 kubelet[2890]: I0317 17:25:21.681119 2890 topology_manager.go:215] "Topology Admit Handler" podUID="b9983be638f73e0eb4b33d76b26f1a6b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:21.683542 kubelet[2890]: I0317 17:25:21.683180 2890 topology_manager.go:215] "Topology Admit Handler" podUID="d00b2260c12c596cd41c4f6543a843bb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-87" Mar 17 17:25:21.697076 systemd[1]: Created slice kubepods-burstable-pod6d5e5c799e4cf70fc35ecebd38f02245.slice - libcontainer container kubepods-burstable-pod6d5e5c799e4cf70fc35ecebd38f02245.slice. Mar 17 17:25:21.722298 systemd[1]: Created slice kubepods-burstable-podd00b2260c12c596cd41c4f6543a843bb.slice - libcontainer container kubepods-burstable-podd00b2260c12c596cd41c4f6543a843bb.slice. Mar 17 17:25:21.732879 systemd[1]: Created slice kubepods-burstable-podb9983be638f73e0eb4b33d76b26f1a6b.slice - libcontainer container kubepods-burstable-podb9983be638f73e0eb4b33d76b26f1a6b.slice. 
Mar 17 17:25:21.743373 kubelet[2890]: E0317 17:25:21.743283 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-87?timeout=10s\": dial tcp 172.31.30.87:6443: connect: connection refused" interval="400ms" Mar 17 17:25:21.749908 kubelet[2890]: I0317 17:25:21.749868 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d00b2260c12c596cd41c4f6543a843bb-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-87\" (UID: \"d00b2260c12c596cd41c4f6543a843bb\") " pod="kube-system/kube-scheduler-ip-172-31-30-87" Mar 17 17:25:21.750115 kubelet[2890]: I0317 17:25:21.749927 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d5e5c799e4cf70fc35ecebd38f02245-ca-certs\") pod \"kube-apiserver-ip-172-31-30-87\" (UID: \"6d5e5c799e4cf70fc35ecebd38f02245\") " pod="kube-system/kube-apiserver-ip-172-31-30-87" Mar 17 17:25:21.750115 kubelet[2890]: I0317 17:25:21.749965 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d5e5c799e4cf70fc35ecebd38f02245-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-87\" (UID: \"6d5e5c799e4cf70fc35ecebd38f02245\") " pod="kube-system/kube-apiserver-ip-172-31-30-87" Mar 17 17:25:21.750115 kubelet[2890]: I0317 17:25:21.750001 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d5e5c799e4cf70fc35ecebd38f02245-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-87\" (UID: \"6d5e5c799e4cf70fc35ecebd38f02245\") " pod="kube-system/kube-apiserver-ip-172-31-30-87" Mar 17 17:25:21.750115 kubelet[2890]: I0317 17:25:21.750082 
2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:21.750320 kubelet[2890]: I0317 17:25:21.750133 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:21.750320 kubelet[2890]: I0317 17:25:21.750170 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:21.750320 kubelet[2890]: I0317 17:25:21.750205 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:21.750320 kubelet[2890]: I0317 17:25:21.750240 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " 
pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:21.849399 kubelet[2890]: I0317 17:25:21.849351 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-87" Mar 17 17:25:21.849854 kubelet[2890]: E0317 17:25:21.849800 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.87:6443/api/v1/nodes\": dial tcp 172.31.30.87:6443: connect: connection refused" node="ip-172-31-30-87" Mar 17 17:25:22.017901 containerd[1939]: time="2025-03-17T17:25:22.017680400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-87,Uid:6d5e5c799e4cf70fc35ecebd38f02245,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:22.029939 containerd[1939]: time="2025-03-17T17:25:22.029550008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-87,Uid:d00b2260c12c596cd41c4f6543a843bb,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:22.038698 containerd[1939]: time="2025-03-17T17:25:22.038344400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-87,Uid:b9983be638f73e0eb4b33d76b26f1a6b,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:22.144417 kubelet[2890]: E0317 17:25:22.144351 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-87?timeout=10s\": dial tcp 172.31.30.87:6443: connect: connection refused" interval="800ms" Mar 17 17:25:22.252676 kubelet[2890]: I0317 17:25:22.252115 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-87" Mar 17 17:25:22.252676 kubelet[2890]: E0317 17:25:22.252532 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.87:6443/api/v1/nodes\": dial tcp 172.31.30.87:6443: connect: connection refused" node="ip-172-31-30-87" Mar 17 17:25:22.493917 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1750992271.mount: Deactivated successfully. Mar 17 17:25:22.501976 containerd[1939]: time="2025-03-17T17:25:22.501919522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:22.503846 containerd[1939]: time="2025-03-17T17:25:22.503776138Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:22.507471 containerd[1939]: time="2025-03-17T17:25:22.507404038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:25:22.508340 containerd[1939]: time="2025-03-17T17:25:22.508288318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:25:22.511107 containerd[1939]: time="2025-03-17T17:25:22.510829918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:25:22.511107 containerd[1939]: time="2025-03-17T17:25:22.511000006Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:22.517831 containerd[1939]: time="2025-03-17T17:25:22.517768762Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:22.519848 containerd[1939]: time="2025-03-17T17:25:22.519777946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 481.324166ms" Mar 17 17:25:22.524964 containerd[1939]: time="2025-03-17T17:25:22.524903555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.118059ms" Mar 17 17:25:22.527230 containerd[1939]: time="2025-03-17T17:25:22.527166911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:22.529412 containerd[1939]: time="2025-03-17T17:25:22.529349303Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.692939ms" Mar 17 17:25:22.693801 containerd[1939]: time="2025-03-17T17:25:22.692758643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:22.693801 containerd[1939]: time="2025-03-17T17:25:22.692912771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:22.693801 containerd[1939]: time="2025-03-17T17:25:22.692949683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:22.693801 containerd[1939]: time="2025-03-17T17:25:22.693135899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:22.694172 containerd[1939]: time="2025-03-17T17:25:22.693634067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:22.694172 containerd[1939]: time="2025-03-17T17:25:22.693793331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:22.694172 containerd[1939]: time="2025-03-17T17:25:22.693835835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:22.694172 containerd[1939]: time="2025-03-17T17:25:22.693986663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:22.699706 containerd[1939]: time="2025-03-17T17:25:22.699547127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:22.699706 containerd[1939]: time="2025-03-17T17:25:22.699654611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:22.700633 containerd[1939]: time="2025-03-17T17:25:22.699910859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:22.700633 containerd[1939]: time="2025-03-17T17:25:22.700174151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:22.739416 systemd[1]: Started cri-containerd-80a5ffb016f16d0825d0d35cf18c3e2568aa97fb18d4929e3df6a4ebecc9ed4a.scope - libcontainer container 80a5ffb016f16d0825d0d35cf18c3e2568aa97fb18d4929e3df6a4ebecc9ed4a. Mar 17 17:25:22.783728 systemd[1]: Started cri-containerd-16197dd13e34c51dd85054734fe7db2cde17df76e0ef24cbb5c6f9a05f2ff1e9.scope - libcontainer container 16197dd13e34c51dd85054734fe7db2cde17df76e0ef24cbb5c6f9a05f2ff1e9. Mar 17 17:25:22.793953 kubelet[2890]: W0317 17:25:22.793522 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:22.793953 kubelet[2890]: E0317 17:25:22.793619 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:22.799496 systemd[1]: Started cri-containerd-80e67ef1356662287a163e5521b702acb2845901d3e34d4d0276e8363962af63.scope - libcontainer container 80e67ef1356662287a163e5521b702acb2845901d3e34d4d0276e8363962af63. 
Mar 17 17:25:22.893566 containerd[1939]: time="2025-03-17T17:25:22.893314332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-87,Uid:b9983be638f73e0eb4b33d76b26f1a6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"80a5ffb016f16d0825d0d35cf18c3e2568aa97fb18d4929e3df6a4ebecc9ed4a\"" Mar 17 17:25:22.922652 containerd[1939]: time="2025-03-17T17:25:22.922104900Z" level=info msg="CreateContainer within sandbox \"80a5ffb016f16d0825d0d35cf18c3e2568aa97fb18d4929e3df6a4ebecc9ed4a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:25:22.925173 kubelet[2890]: W0317 17:25:22.925076 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:22.925349 kubelet[2890]: E0317 17:25:22.925183 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:22.931740 containerd[1939]: time="2025-03-17T17:25:22.931672645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-87,Uid:d00b2260c12c596cd41c4f6543a843bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"16197dd13e34c51dd85054734fe7db2cde17df76e0ef24cbb5c6f9a05f2ff1e9\"" Mar 17 17:25:22.938137 containerd[1939]: time="2025-03-17T17:25:22.937958449Z" level=info msg="CreateContainer within sandbox \"16197dd13e34c51dd85054734fe7db2cde17df76e0ef24cbb5c6f9a05f2ff1e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:25:22.943950 containerd[1939]: time="2025-03-17T17:25:22.943872997Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-87,Uid:6d5e5c799e4cf70fc35ecebd38f02245,Namespace:kube-system,Attempt:0,} returns sandbox id \"80e67ef1356662287a163e5521b702acb2845901d3e34d4d0276e8363962af63\"" Mar 17 17:25:22.945636 kubelet[2890]: E0317 17:25:22.945547 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-87?timeout=10s\": dial tcp 172.31.30.87:6443: connect: connection refused" interval="1.6s" Mar 17 17:25:22.953756 containerd[1939]: time="2025-03-17T17:25:22.953692369Z" level=info msg="CreateContainer within sandbox \"80e67ef1356662287a163e5521b702acb2845901d3e34d4d0276e8363962af63\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:25:22.985543 containerd[1939]: time="2025-03-17T17:25:22.985464037Z" level=info msg="CreateContainer within sandbox \"80a5ffb016f16d0825d0d35cf18c3e2568aa97fb18d4929e3df6a4ebecc9ed4a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0\"" Mar 17 17:25:22.986762 containerd[1939]: time="2025-03-17T17:25:22.986645005Z" level=info msg="StartContainer for \"e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0\"" Mar 17 17:25:22.990980 containerd[1939]: time="2025-03-17T17:25:22.990785581Z" level=info msg="CreateContainer within sandbox \"16197dd13e34c51dd85054734fe7db2cde17df76e0ef24cbb5c6f9a05f2ff1e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3\"" Mar 17 17:25:22.991919 containerd[1939]: time="2025-03-17T17:25:22.991659121Z" level=info msg="StartContainer for \"349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3\"" Mar 17 17:25:23.012898 containerd[1939]: time="2025-03-17T17:25:23.012775305Z" level=info msg="CreateContainer within sandbox 
\"80e67ef1356662287a163e5521b702acb2845901d3e34d4d0276e8363962af63\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"833a9481437c71ca712d083b576bb2b375fac2e9a70c76f062dc9d1995967426\"" Mar 17 17:25:23.013684 containerd[1939]: time="2025-03-17T17:25:23.013475829Z" level=info msg="StartContainer for \"833a9481437c71ca712d083b576bb2b375fac2e9a70c76f062dc9d1995967426\"" Mar 17 17:25:23.034236 kubelet[2890]: W0317 17:25:23.034055 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-87&limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:23.034236 kubelet[2890]: E0317 17:25:23.034157 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-87&limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:23.045421 systemd[1]: Started cri-containerd-e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0.scope - libcontainer container e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0. Mar 17 17:25:23.056904 kubelet[2890]: I0317 17:25:23.056665 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-87" Mar 17 17:25:23.059100 kubelet[2890]: E0317 17:25:23.057949 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.87:6443/api/v1/nodes\": dial tcp 172.31.30.87:6443: connect: connection refused" node="ip-172-31-30-87" Mar 17 17:25:23.094310 systemd[1]: Started cri-containerd-349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3.scope - libcontainer container 349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3. 
Mar 17 17:25:23.116359 systemd[1]: Started cri-containerd-833a9481437c71ca712d083b576bb2b375fac2e9a70c76f062dc9d1995967426.scope - libcontainer container 833a9481437c71ca712d083b576bb2b375fac2e9a70c76f062dc9d1995967426. Mar 17 17:25:23.133789 kubelet[2890]: W0317 17:25:23.133693 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:23.133789 kubelet[2890]: E0317 17:25:23.133793 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.87:6443: connect: connection refused Mar 17 17:25:23.173908 containerd[1939]: time="2025-03-17T17:25:23.173850298Z" level=info msg="StartContainer for \"e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0\" returns successfully" Mar 17 17:25:23.221998 containerd[1939]: time="2025-03-17T17:25:23.221614618Z" level=info msg="StartContainer for \"833a9481437c71ca712d083b576bb2b375fac2e9a70c76f062dc9d1995967426\" returns successfully" Mar 17 17:25:23.252455 containerd[1939]: time="2025-03-17T17:25:23.252096346Z" level=info msg="StartContainer for \"349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3\" returns successfully" Mar 17 17:25:24.661937 kubelet[2890]: I0317 17:25:24.661898 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-87" Mar 17 17:25:26.929167 update_engine[1917]: I20250317 17:25:26.929075 1917 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:25:27.044141 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3182) Mar 17 17:25:27.066383 kubelet[2890]: E0317 17:25:27.066332 2890 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-87\" not found" node="ip-172-31-30-87" Mar 17 17:25:27.129056 kubelet[2890]: E0317 17:25:27.124868 2890 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-87.182da710d049f5d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-87,UID:ip-172-31-30-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-87,},FirstTimestamp:2025-03-17 17:25:21.518073298 +0000 UTC m=+1.550725593,LastTimestamp:2025-03-17 17:25:21.518073298 +0000 UTC m=+1.550725593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-87,}" Mar 17 17:25:27.160009 kubelet[2890]: I0317 17:25:27.158723 2890 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-87" Mar 17 17:25:27.297058 kubelet[2890]: E0317 17:25:27.292365 2890 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-87.182da710d2ce4ffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-87,UID:ip-172-31-30-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-30-87,},FirstTimestamp:2025-03-17 17:25:21.560301562 +0000 UTC m=+1.592953869,LastTimestamp:2025-03-17 17:25:21.560301562 +0000 UTC 
m=+1.592953869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-87,}" Mar 17 17:25:27.472058 kubelet[2890]: E0317 17:25:27.470373 2890 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-87.182da710d4420bde default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-87,UID:ip-172-31-30-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-30-87 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-30-87,},FirstTimestamp:2025-03-17 17:25:21.584663518 +0000 UTC m=+1.617315801,LastTimestamp:2025-03-17 17:25:21.584663518 +0000 UTC m=+1.617315801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-87,}" Mar 17 17:25:27.518052 kubelet[2890]: I0317 17:25:27.514652 2890 apiserver.go:52] "Watching apiserver" Mar 17 17:25:27.543546 kubelet[2890]: I0317 17:25:27.543479 2890 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:25:27.562079 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3173) Mar 17 17:25:28.039086 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3173) Mar 17 17:25:29.917309 systemd[1]: Reloading requested from client PID 3436 ('systemctl') (unit session-9.scope)... Mar 17 17:25:29.917339 systemd[1]: Reloading... Mar 17 17:25:30.164140 zram_generator::config[3476]: No configuration found. 
Mar 17 17:25:30.408530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:30.621703 systemd[1]: Reloading finished in 703 ms. Mar 17 17:25:30.715240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:30.731802 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:25:30.732352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:30.732443 systemd[1]: kubelet.service: Consumed 2.339s CPU time, 111.6M memory peak, 0B memory swap peak. Mar 17 17:25:30.739783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:31.057403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:31.069908 (kubelet)[3536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:25:31.193242 kubelet[3536]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:31.193741 kubelet[3536]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:25:31.193831 kubelet[3536]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:25:31.195075 kubelet[3536]: I0317 17:25:31.194060 3536 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:25:31.203446 kubelet[3536]: I0317 17:25:31.203400 3536 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:25:31.204377 kubelet[3536]: I0317 17:25:31.204181 3536 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:25:31.204763 kubelet[3536]: I0317 17:25:31.204718 3536 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:25:31.208625 kubelet[3536]: I0317 17:25:31.208501 3536 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:25:31.212499 kubelet[3536]: I0317 17:25:31.211095 3536 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:25:31.233195 kubelet[3536]: I0317 17:25:31.233132 3536 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:25:31.233647 kubelet[3536]: I0317 17:25:31.233589 3536 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:25:31.234471 kubelet[3536]: I0317 17:25:31.233645 3536 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:25:31.234471 kubelet[3536]: I0317 17:25:31.234012 3536 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
17:25:31.234471 kubelet[3536]: I0317 17:25:31.234072 3536 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:25:31.234471 kubelet[3536]: I0317 17:25:31.234133 3536 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:31.234471 kubelet[3536]: I0317 17:25:31.234466 3536 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:25:31.235481 kubelet[3536]: I0317 17:25:31.234497 3536 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:25:31.235481 kubelet[3536]: I0317 17:25:31.234553 3536 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:25:31.235481 kubelet[3536]: I0317 17:25:31.234582 3536 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:25:31.244882 kubelet[3536]: I0317 17:25:31.244530 3536 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:25:31.247497 kubelet[3536]: I0317 17:25:31.247440 3536 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:25:31.252700 kubelet[3536]: I0317 17:25:31.250552 3536 server.go:1264] "Started kubelet" Mar 17 17:25:31.252700 kubelet[3536]: I0317 17:25:31.251441 3536 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:25:31.252988 kubelet[3536]: I0317 17:25:31.252956 3536 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:25:31.254582 kubelet[3536]: I0317 17:25:31.254527 3536 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:25:31.257268 kubelet[3536]: I0317 17:25:31.257214 3536 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:25:31.257743 kubelet[3536]: I0317 17:25:31.257718 3536 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:25:31.270899 kubelet[3536]: I0317 17:25:31.270831 3536 
volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:25:31.271901 kubelet[3536]: I0317 17:25:31.271852 3536 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:25:31.272532 kubelet[3536]: I0317 17:25:31.272498 3536 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:25:31.283763 sudo[3550]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:25:31.284691 sudo[3550]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:25:31.312850 kubelet[3536]: I0317 17:25:31.308167 3536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:25:31.312850 kubelet[3536]: I0317 17:25:31.309765 3536 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:25:31.312850 kubelet[3536]: I0317 17:25:31.309905 3536 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:25:31.313212 kubelet[3536]: I0317 17:25:31.313162 3536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:25:31.313296 kubelet[3536]: I0317 17:25:31.313237 3536 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:25:31.313296 kubelet[3536]: I0317 17:25:31.313277 3536 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:25:31.313414 kubelet[3536]: E0317 17:25:31.313354 3536 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:25:31.358955 kubelet[3536]: I0317 17:25:31.358874 3536 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:25:31.359726 kubelet[3536]: E0317 17:25:31.359690 3536 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:25:31.398494 kubelet[3536]: I0317 17:25:31.398434 3536 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-87" Mar 17 17:25:31.415824 kubelet[3536]: E0317 17:25:31.415784 3536 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:25:31.417427 kubelet[3536]: I0317 17:25:31.416626 3536 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-87" Mar 17 17:25:31.417427 kubelet[3536]: I0317 17:25:31.416748 3536 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-87" Mar 17 17:25:31.483677 kubelet[3536]: I0317 17:25:31.483189 3536 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:25:31.483677 kubelet[3536]: I0317 17:25:31.483219 3536 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:25:31.483677 kubelet[3536]: I0317 17:25:31.483253 3536 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:31.483677 kubelet[3536]: I0317 17:25:31.483496 3536 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:25:31.483677 kubelet[3536]: I0317 17:25:31.483517 3536 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:25:31.483677 kubelet[3536]: I0317 17:25:31.483554 3536 policy_none.go:49] "None policy: Start" Mar 17 17:25:31.487461 kubelet[3536]: I0317 17:25:31.486013 3536 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:25:31.487461 kubelet[3536]: I0317 17:25:31.486082 3536 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:25:31.487461 kubelet[3536]: I0317 17:25:31.486351 3536 state_mem.go:75] "Updated machine memory state" Mar 17 17:25:31.498050 kubelet[3536]: I0317 17:25:31.497797 3536 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:25:31.502351 
kubelet[3536]: I0317 17:25:31.499811 3536 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:25:31.502846 kubelet[3536]: I0317 17:25:31.502813 3536 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:25:31.617376 kubelet[3536]: I0317 17:25:31.617148 3536 topology_manager.go:215] "Topology Admit Handler" podUID="6d5e5c799e4cf70fc35ecebd38f02245" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-87" Mar 17 17:25:31.618508 kubelet[3536]: I0317 17:25:31.618444 3536 topology_manager.go:215] "Topology Admit Handler" podUID="b9983be638f73e0eb4b33d76b26f1a6b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:31.618634 kubelet[3536]: I0317 17:25:31.618549 3536 topology_manager.go:215] "Topology Admit Handler" podUID="d00b2260c12c596cd41c4f6543a843bb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-87" Mar 17 17:25:31.632881 kubelet[3536]: E0317 17:25:31.632742 3536 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-87\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-87" Mar 17 17:25:31.690113 kubelet[3536]: I0317 17:25:31.689818 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d00b2260c12c596cd41c4f6543a843bb-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-87\" (UID: \"d00b2260c12c596cd41c4f6543a843bb\") " pod="kube-system/kube-scheduler-ip-172-31-30-87" Mar 17 17:25:31.690113 kubelet[3536]: I0317 17:25:31.689901 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d5e5c799e4cf70fc35ecebd38f02245-ca-certs\") pod \"kube-apiserver-ip-172-31-30-87\" (UID: \"6d5e5c799e4cf70fc35ecebd38f02245\") " pod="kube-system/kube-apiserver-ip-172-31-30-87" Mar 17 17:25:31.690920 
kubelet[3536]: I0317 17:25:31.689944 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:31.691178 kubelet[3536]: I0317 17:25:31.691000 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:31.691358 kubelet[3536]: I0317 17:25:31.691241 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:31.691716 kubelet[3536]: I0317 17:25:31.691508 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:31.691716 kubelet[3536]: I0317 17:25:31.691614 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d5e5c799e4cf70fc35ecebd38f02245-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-87\" (UID: 
\"6d5e5c799e4cf70fc35ecebd38f02245\") " pod="kube-system/kube-apiserver-ip-172-31-30-87" Mar 17 17:25:31.691966 kubelet[3536]: I0317 17:25:31.691803 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d5e5c799e4cf70fc35ecebd38f02245-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-87\" (UID: \"6d5e5c799e4cf70fc35ecebd38f02245\") " pod="kube-system/kube-apiserver-ip-172-31-30-87" Mar 17 17:25:31.692291 kubelet[3536]: I0317 17:25:31.692204 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9983be638f73e0eb4b33d76b26f1a6b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-87\" (UID: \"b9983be638f73e0eb4b33d76b26f1a6b\") " pod="kube-system/kube-controller-manager-ip-172-31-30-87" Mar 17 17:25:32.157147 sudo[3550]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:32.236004 kubelet[3536]: I0317 17:25:32.235656 3536 apiserver.go:52] "Watching apiserver" Mar 17 17:25:32.273913 kubelet[3536]: I0317 17:25:32.273859 3536 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:25:32.500934 kubelet[3536]: I0317 17:25:32.500835 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-87" podStartSLOduration=1.500811764 podStartE2EDuration="1.500811764s" podCreationTimestamp="2025-03-17 17:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:32.485800916 +0000 UTC m=+1.403280344" watchObservedRunningTime="2025-03-17 17:25:32.500811764 +0000 UTC m=+1.418291180" Mar 17 17:25:32.504885 kubelet[3536]: I0317 17:25:32.504767 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-30-87" podStartSLOduration=1.504743264 podStartE2EDuration="1.504743264s" podCreationTimestamp="2025-03-17 17:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:32.498771104 +0000 UTC m=+1.416250520" watchObservedRunningTime="2025-03-17 17:25:32.504743264 +0000 UTC m=+1.422222680" Mar 17 17:25:32.545044 kubelet[3536]: I0317 17:25:32.544716 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-87" podStartSLOduration=2.544694528 podStartE2EDuration="2.544694528s" podCreationTimestamp="2025-03-17 17:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:32.529596344 +0000 UTC m=+1.447075772" watchObservedRunningTime="2025-03-17 17:25:32.544694528 +0000 UTC m=+1.462173932" Mar 17 17:25:34.789776 sudo[2289]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:34.813095 sshd[2288]: Connection closed by 139.178.68.195 port 58960 Mar 17 17:25:34.814070 sshd-session[2286]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:34.821161 systemd[1]: sshd@8-172.31.30.87:22-139.178.68.195:58960.service: Deactivated successfully. Mar 17 17:25:34.826572 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:25:34.826942 systemd[1]: session-9.scope: Consumed 10.173s CPU time, 187.4M memory peak, 0B memory swap peak. Mar 17 17:25:34.828292 systemd-logind[1916]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:25:34.830525 systemd-logind[1916]: Removed session 9. 
Mar 17 17:25:44.950232 kubelet[3536]: I0317 17:25:44.950178 3536 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:25:44.951532 kubelet[3536]: I0317 17:25:44.951095 3536 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:25:44.951600 containerd[1939]: time="2025-03-17T17:25:44.950751514Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:25:45.875744 kubelet[3536]: I0317 17:25:45.875669 3536 topology_manager.go:215] "Topology Admit Handler" podUID="3713f976-ff19-49b4-9c04-235a252cda6c" podNamespace="kube-system" podName="kube-proxy-dkjjv"
Mar 17 17:25:45.897755 systemd[1]: Created slice kubepods-besteffort-pod3713f976_ff19_49b4_9c04_235a252cda6c.slice - libcontainer container kubepods-besteffort-pod3713f976_ff19_49b4_9c04_235a252cda6c.slice.
Mar 17 17:25:45.902298 kubelet[3536]: I0317 17:25:45.900662 3536 topology_manager.go:215] "Topology Admit Handler" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" podNamespace="kube-system" podName="cilium-79b5d"
Mar 17 17:25:45.924949 systemd[1]: Created slice kubepods-burstable-pod0db18af2_8cc1_4e45_bd08_a198372edfbd.slice - libcontainer container kubepods-burstable-pod0db18af2_8cc1_4e45_bd08_a198372edfbd.slice.
Mar 17 17:25:45.982436 kubelet[3536]: I0317 17:25:45.982318 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-xtables-lock\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.982436 kubelet[3536]: I0317 17:25:45.982387 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-cgroup\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.982436 kubelet[3536]: I0317 17:25:45.982432 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cni-path\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.982436 kubelet[3536]: I0317 17:25:45.982471 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-bpf-maps\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.982436 kubelet[3536]: I0317 17:25:45.982509 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-hostproc\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.982436 kubelet[3536]: I0317 17:25:45.982591 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3713f976-ff19-49b4-9c04-235a252cda6c-lib-modules\") pod \"kube-proxy-dkjjv\" (UID: \"3713f976-ff19-49b4-9c04-235a252cda6c\") " pod="kube-system/kube-proxy-dkjjv" Mar 17 17:25:45.983868 kubelet[3536]: I0317 17:25:45.982634 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-config-path\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.983868 kubelet[3536]: I0317 17:25:45.982674 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3713f976-ff19-49b4-9c04-235a252cda6c-kube-proxy\") pod \"kube-proxy-dkjjv\" (UID: \"3713f976-ff19-49b4-9c04-235a252cda6c\") " pod="kube-system/kube-proxy-dkjjv" Mar 17 17:25:45.983868 kubelet[3536]: I0317 17:25:45.982712 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0db18af2-8cc1-4e45-bd08-a198372edfbd-clustermesh-secrets\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.983868 kubelet[3536]: I0317 17:25:45.982750 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-etc-cni-netd\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.983868 kubelet[3536]: I0317 17:25:45.982807 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-net\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.984254 kubelet[3536]: I0317 17:25:45.982844 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-kernel\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.984254 kubelet[3536]: I0317 17:25:45.982880 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-hubble-tls\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.984254 kubelet[3536]: I0317 17:25:45.982918 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3713f976-ff19-49b4-9c04-235a252cda6c-xtables-lock\") pod \"kube-proxy-dkjjv\" (UID: \"3713f976-ff19-49b4-9c04-235a252cda6c\") " pod="kube-system/kube-proxy-dkjjv" Mar 17 17:25:45.984254 kubelet[3536]: I0317 17:25:45.982954 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-run\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.984254 kubelet[3536]: I0317 17:25:45.982988 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-lib-modules\") pod \"cilium-79b5d\" (UID: 
\"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:45.984515 kubelet[3536]: I0317 17:25:45.983078 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5grs\" (UniqueName: \"kubernetes.io/projected/3713f976-ff19-49b4-9c04-235a252cda6c-kube-api-access-l5grs\") pod \"kube-proxy-dkjjv\" (UID: \"3713f976-ff19-49b4-9c04-235a252cda6c\") " pod="kube-system/kube-proxy-dkjjv" Mar 17 17:25:45.984515 kubelet[3536]: I0317 17:25:45.983136 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8trb\" (UniqueName: \"kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-kube-api-access-l8trb\") pod \"cilium-79b5d\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") " pod="kube-system/cilium-79b5d" Mar 17 17:25:46.138103 kubelet[3536]: I0317 17:25:46.137035 3536 topology_manager.go:215] "Topology Admit Handler" podUID="42037dc4-9081-4ced-b1a4-89648b0207b2" podNamespace="kube-system" podName="cilium-operator-599987898-s8jgm" Mar 17 17:25:46.154515 systemd[1]: Created slice kubepods-besteffort-pod42037dc4_9081_4ced_b1a4_89648b0207b2.slice - libcontainer container kubepods-besteffort-pod42037dc4_9081_4ced_b1a4_89648b0207b2.slice. 
Mar 17 17:25:46.184599 kubelet[3536]: I0317 17:25:46.184546 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42037dc4-9081-4ced-b1a4-89648b0207b2-cilium-config-path\") pod \"cilium-operator-599987898-s8jgm\" (UID: \"42037dc4-9081-4ced-b1a4-89648b0207b2\") " pod="kube-system/cilium-operator-599987898-s8jgm" Mar 17 17:25:46.184867 kubelet[3536]: I0317 17:25:46.184822 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn95g\" (UniqueName: \"kubernetes.io/projected/42037dc4-9081-4ced-b1a4-89648b0207b2-kube-api-access-mn95g\") pod \"cilium-operator-599987898-s8jgm\" (UID: \"42037dc4-9081-4ced-b1a4-89648b0207b2\") " pod="kube-system/cilium-operator-599987898-s8jgm" Mar 17 17:25:46.213303 containerd[1939]: time="2025-03-17T17:25:46.213234656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkjjv,Uid:3713f976-ff19-49b4-9c04-235a252cda6c,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:46.240098 containerd[1939]: time="2025-03-17T17:25:46.237010052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79b5d,Uid:0db18af2-8cc1-4e45-bd08-a198372edfbd,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:46.308335 containerd[1939]: time="2025-03-17T17:25:46.307752309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:46.308335 containerd[1939]: time="2025-03-17T17:25:46.307846209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:46.308335 containerd[1939]: time="2025-03-17T17:25:46.307883121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:46.311854 containerd[1939]: time="2025-03-17T17:25:46.311697105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:46.322104 containerd[1939]: time="2025-03-17T17:25:46.321364785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:46.322104 containerd[1939]: time="2025-03-17T17:25:46.321481365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:46.322104 containerd[1939]: time="2025-03-17T17:25:46.321519177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:46.323427 containerd[1939]: time="2025-03-17T17:25:46.323112861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:46.353342 systemd[1]: Started cri-containerd-312d510b96dbcabdafeb7f6adb4c9a1e9c55ae1b1f8d89b1a8e938d449403a78.scope - libcontainer container 312d510b96dbcabdafeb7f6adb4c9a1e9c55ae1b1f8d89b1a8e938d449403a78. Mar 17 17:25:46.367288 systemd[1]: Started cri-containerd-decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d.scope - libcontainer container decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d. 
Mar 17 17:25:46.420528 containerd[1939]: time="2025-03-17T17:25:46.420314445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkjjv,Uid:3713f976-ff19-49b4-9c04-235a252cda6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"312d510b96dbcabdafeb7f6adb4c9a1e9c55ae1b1f8d89b1a8e938d449403a78\"" Mar 17 17:25:46.434219 containerd[1939]: time="2025-03-17T17:25:46.434155341Z" level=info msg="CreateContainer within sandbox \"312d510b96dbcabdafeb7f6adb4c9a1e9c55ae1b1f8d89b1a8e938d449403a78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:25:46.439059 containerd[1939]: time="2025-03-17T17:25:46.438967401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79b5d,Uid:0db18af2-8cc1-4e45-bd08-a198372edfbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\"" Mar 17 17:25:46.443169 containerd[1939]: time="2025-03-17T17:25:46.442138281Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:25:46.464397 containerd[1939]: time="2025-03-17T17:25:46.464342745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s8jgm,Uid:42037dc4-9081-4ced-b1a4-89648b0207b2,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:46.480522 containerd[1939]: time="2025-03-17T17:25:46.480445366Z" level=info msg="CreateContainer within sandbox \"312d510b96dbcabdafeb7f6adb4c9a1e9c55ae1b1f8d89b1a8e938d449403a78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f9b47f801b812aacf940cb6f768f246a6d6c867d32b39afba6bd430a7553d8a7\"" Mar 17 17:25:46.482082 containerd[1939]: time="2025-03-17T17:25:46.481555546Z" level=info msg="StartContainer for \"f9b47f801b812aacf940cb6f768f246a6d6c867d32b39afba6bd430a7553d8a7\"" Mar 17 17:25:46.534185 containerd[1939]: time="2025-03-17T17:25:46.532966006Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:46.538558 containerd[1939]: time="2025-03-17T17:25:46.533782222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:46.538558 containerd[1939]: time="2025-03-17T17:25:46.533849230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:46.538558 containerd[1939]: time="2025-03-17T17:25:46.534077050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:46.540236 systemd[1]: Started cri-containerd-f9b47f801b812aacf940cb6f768f246a6d6c867d32b39afba6bd430a7553d8a7.scope - libcontainer container f9b47f801b812aacf940cb6f768f246a6d6c867d32b39afba6bd430a7553d8a7. Mar 17 17:25:46.576482 systemd[1]: Started cri-containerd-c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b.scope - libcontainer container c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b. 
Mar 17 17:25:46.627728 containerd[1939]: time="2025-03-17T17:25:46.627647470Z" level=info msg="StartContainer for \"f9b47f801b812aacf940cb6f768f246a6d6c867d32b39afba6bd430a7553d8a7\" returns successfully"
Mar 17 17:25:46.678707 containerd[1939]: time="2025-03-17T17:25:46.677709910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s8jgm,Uid:42037dc4-9081-4ced-b1a4-89648b0207b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b\""
Mar 17 17:25:47.501512 kubelet[3536]: I0317 17:25:47.500897 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dkjjv" podStartSLOduration=2.500877227 podStartE2EDuration="2.500877227s" podCreationTimestamp="2025-03-17 17:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:47.500478707 +0000 UTC m=+16.417958135" watchObservedRunningTime="2025-03-17 17:25:47.500877227 +0000 UTC m=+16.418356655"
Mar 17 17:25:53.962738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754682086.mount: Deactivated successfully.
Mar 17 17:25:56.366132 containerd[1939]: time="2025-03-17T17:25:56.366064183Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:56.367784 containerd[1939]: time="2025-03-17T17:25:56.367712431Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:25:56.368538 containerd[1939]: time="2025-03-17T17:25:56.368452591Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:56.372511 containerd[1939]: time="2025-03-17T17:25:56.371778151Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.929569598s" Mar 17 17:25:56.372511 containerd[1939]: time="2025-03-17T17:25:56.371840083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:25:56.375322 containerd[1939]: time="2025-03-17T17:25:56.374236867Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:25:56.379591 containerd[1939]: time="2025-03-17T17:25:56.379532707Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:25:56.401477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011167083.mount: Deactivated successfully. Mar 17 17:25:56.405474 containerd[1939]: time="2025-03-17T17:25:56.405403795Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\"" Mar 17 17:25:56.406594 containerd[1939]: time="2025-03-17T17:25:56.406544059Z" level=info msg="StartContainer for \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\"" Mar 17 17:25:56.466330 systemd[1]: Started cri-containerd-160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232.scope - libcontainer container 160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232. Mar 17 17:25:56.512795 containerd[1939]: time="2025-03-17T17:25:56.512726035Z" level=info msg="StartContainer for \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\" returns successfully" Mar 17 17:25:56.539519 systemd[1]: cri-containerd-160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232.scope: Deactivated successfully. Mar 17 17:25:57.395109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232-rootfs.mount: Deactivated successfully. 
Mar 17 17:25:57.944000 containerd[1939]: time="2025-03-17T17:25:57.943752082Z" level=info msg="shim disconnected" id=160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232 namespace=k8s.io
Mar 17 17:25:57.944000 containerd[1939]: time="2025-03-17T17:25:57.943828222Z" level=warning msg="cleaning up after shim disconnected" id=160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232 namespace=k8s.io
Mar 17 17:25:57.944000 containerd[1939]: time="2025-03-17T17:25:57.943849438Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:25:58.424682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165962611.mount: Deactivated successfully.
Mar 17 17:25:58.547501 containerd[1939]: time="2025-03-17T17:25:58.547109973Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:25:58.595655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654546395.mount: Deactivated successfully.
Mar 17 17:25:58.608197 containerd[1939]: time="2025-03-17T17:25:58.608140294Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\""
Mar 17 17:25:58.609495 containerd[1939]: time="2025-03-17T17:25:58.609310090Z" level=info msg="StartContainer for \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\""
Mar 17 17:25:58.666328 systemd[1]: Started cri-containerd-1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6.scope - libcontainer container 1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6.scope.
Mar 17 17:25:58.721772 containerd[1939]: time="2025-03-17T17:25:58.721652158Z" level=info msg="StartContainer for \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\" returns successfully" Mar 17 17:25:58.745389 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:25:58.746163 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:25:58.746302 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:25:58.756512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:25:58.757868 systemd[1]: cri-containerd-1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6.scope: Deactivated successfully. Mar 17 17:25:58.822485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:25:58.850270 containerd[1939]: time="2025-03-17T17:25:58.850126511Z" level=info msg="shim disconnected" id=1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6 namespace=k8s.io Mar 17 17:25:58.850270 containerd[1939]: time="2025-03-17T17:25:58.850219235Z" level=warning msg="cleaning up after shim disconnected" id=1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6 namespace=k8s.io Mar 17 17:25:58.850270 containerd[1939]: time="2025-03-17T17:25:58.850240679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:25:59.389771 containerd[1939]: time="2025-03-17T17:25:59.389495014Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:59.392604 containerd[1939]: time="2025-03-17T17:25:59.392463838Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:25:59.394563 containerd[1939]: time="2025-03-17T17:25:59.394483726Z" 
level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:59.398568 containerd[1939]: time="2025-03-17T17:25:59.397538734Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.023237007s" Mar 17 17:25:59.398568 containerd[1939]: time="2025-03-17T17:25:59.397597786Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:25:59.402525 containerd[1939]: time="2025-03-17T17:25:59.402408094Z" level=info msg="CreateContainer within sandbox \"c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:25:59.415288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6-rootfs.mount: Deactivated successfully. 
Mar 17 17:25:59.435745 containerd[1939]: time="2025-03-17T17:25:59.435682942Z" level=info msg="CreateContainer within sandbox \"c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\"" Mar 17 17:25:59.436443 containerd[1939]: time="2025-03-17T17:25:59.436395838Z" level=info msg="StartContainer for \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\"" Mar 17 17:25:59.491343 systemd[1]: Started cri-containerd-6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d.scope - libcontainer container 6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d. Mar 17 17:25:59.539962 containerd[1939]: time="2025-03-17T17:25:59.539715250Z" level=info msg="StartContainer for \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\" returns successfully" Mar 17 17:25:59.573792 containerd[1939]: time="2025-03-17T17:25:59.573567911Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:25:59.623409 containerd[1939]: time="2025-03-17T17:25:59.623316743Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\"" Mar 17 17:25:59.627071 containerd[1939]: time="2025-03-17T17:25:59.625230695Z" level=info msg="StartContainer for \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\"" Mar 17 17:25:59.627714 kubelet[3536]: I0317 17:25:59.627454 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-s8jgm" podStartSLOduration=0.911136864 podStartE2EDuration="13.627427907s" 
podCreationTimestamp="2025-03-17 17:25:46 +0000 UTC" firstStartedPulling="2025-03-17 17:25:46.683589947 +0000 UTC m=+15.601069363" lastFinishedPulling="2025-03-17 17:25:59.39988099 +0000 UTC m=+28.317360406" observedRunningTime="2025-03-17 17:25:59.578971451 +0000 UTC m=+28.496450903" watchObservedRunningTime="2025-03-17 17:25:59.627427907 +0000 UTC m=+28.544907335" Mar 17 17:25:59.690365 systemd[1]: Started cri-containerd-99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01.scope - libcontainer container 99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01. Mar 17 17:25:59.769986 containerd[1939]: time="2025-03-17T17:25:59.769926084Z" level=info msg="StartContainer for \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\" returns successfully" Mar 17 17:25:59.778155 systemd[1]: cri-containerd-99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01.scope: Deactivated successfully. Mar 17 17:25:59.952061 containerd[1939]: time="2025-03-17T17:25:59.951824304Z" level=info msg="shim disconnected" id=99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01 namespace=k8s.io Mar 17 17:25:59.952061 containerd[1939]: time="2025-03-17T17:25:59.951903720Z" level=warning msg="cleaning up after shim disconnected" id=99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01 namespace=k8s.io Mar 17 17:25:59.952061 containerd[1939]: time="2025-03-17T17:25:59.951927480Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:00.578714 containerd[1939]: time="2025-03-17T17:26:00.578473776Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:26:00.601915 containerd[1939]: time="2025-03-17T17:26:00.600600996Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\"" Mar 17 17:26:00.607818 containerd[1939]: time="2025-03-17T17:26:00.604296768Z" level=info msg="StartContainer for \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\"" Mar 17 17:26:00.609176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745871157.mount: Deactivated successfully. Mar 17 17:26:00.690581 systemd[1]: Started cri-containerd-f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76.scope - libcontainer container f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76. Mar 17 17:26:00.812416 containerd[1939]: time="2025-03-17T17:26:00.812210101Z" level=info msg="StartContainer for \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\" returns successfully" Mar 17 17:26:00.814009 systemd[1]: cri-containerd-f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76.scope: Deactivated successfully. Mar 17 17:26:00.891762 containerd[1939]: time="2025-03-17T17:26:00.891184261Z" level=info msg="shim disconnected" id=f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76 namespace=k8s.io Mar 17 17:26:00.893427 containerd[1939]: time="2025-03-17T17:26:00.891516889Z" level=warning msg="cleaning up after shim disconnected" id=f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76 namespace=k8s.io Mar 17 17:26:00.893427 containerd[1939]: time="2025-03-17T17:26:00.893032525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:01.414259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76-rootfs.mount: Deactivated successfully. 
Mar 17 17:26:01.590885 containerd[1939]: time="2025-03-17T17:26:01.590775133Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:26:01.647809 containerd[1939]: time="2025-03-17T17:26:01.647738425Z" level=info msg="CreateContainer within sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\"" Mar 17 17:26:01.651517 containerd[1939]: time="2025-03-17T17:26:01.651425125Z" level=info msg="StartContainer for \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\"" Mar 17 17:26:01.718348 systemd[1]: Started cri-containerd-eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8.scope - libcontainer container eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8. 
Mar 17 17:26:01.810684 containerd[1939]: time="2025-03-17T17:26:01.810611774Z" level=info msg="StartContainer for \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\" returns successfully" Mar 17 17:26:02.128302 kubelet[3536]: I0317 17:26:02.128138 3536 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:26:02.177908 kubelet[3536]: I0317 17:26:02.177424 3536 topology_manager.go:215] "Topology Admit Handler" podUID="7edb5304-49a5-461b-9338-b1258121c959" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wgvtm" Mar 17 17:26:02.182995 kubelet[3536]: I0317 17:26:02.182853 3536 topology_manager.go:215] "Topology Admit Handler" podUID="919b2f40-3627-4cbb-8cc9-8c7357e46b64" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qrrsj" Mar 17 17:26:02.197411 systemd[1]: Created slice kubepods-burstable-pod7edb5304_49a5_461b_9338_b1258121c959.slice - libcontainer container kubepods-burstable-pod7edb5304_49a5_461b_9338_b1258121c959.slice. Mar 17 17:26:02.204873 kubelet[3536]: I0317 17:26:02.204817 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7edb5304-49a5-461b-9338-b1258121c959-config-volume\") pod \"coredns-7db6d8ff4d-wgvtm\" (UID: \"7edb5304-49a5-461b-9338-b1258121c959\") " pod="kube-system/coredns-7db6d8ff4d-wgvtm" Mar 17 17:26:02.205197 kubelet[3536]: I0317 17:26:02.204881 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919b2f40-3627-4cbb-8cc9-8c7357e46b64-config-volume\") pod \"coredns-7db6d8ff4d-qrrsj\" (UID: \"919b2f40-3627-4cbb-8cc9-8c7357e46b64\") " pod="kube-system/coredns-7db6d8ff4d-qrrsj" Mar 17 17:26:02.205197 kubelet[3536]: I0317 17:26:02.204936 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46lpw\" (UniqueName: 
\"kubernetes.io/projected/919b2f40-3627-4cbb-8cc9-8c7357e46b64-kube-api-access-46lpw\") pod \"coredns-7db6d8ff4d-qrrsj\" (UID: \"919b2f40-3627-4cbb-8cc9-8c7357e46b64\") " pod="kube-system/coredns-7db6d8ff4d-qrrsj" Mar 17 17:26:02.205197 kubelet[3536]: I0317 17:26:02.204976 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jtsg\" (UniqueName: \"kubernetes.io/projected/7edb5304-49a5-461b-9338-b1258121c959-kube-api-access-2jtsg\") pod \"coredns-7db6d8ff4d-wgvtm\" (UID: \"7edb5304-49a5-461b-9338-b1258121c959\") " pod="kube-system/coredns-7db6d8ff4d-wgvtm" Mar 17 17:26:02.215674 systemd[1]: Created slice kubepods-burstable-pod919b2f40_3627_4cbb_8cc9_8c7357e46b64.slice - libcontainer container kubepods-burstable-pod919b2f40_3627_4cbb_8cc9_8c7357e46b64.slice. Mar 17 17:26:02.509304 containerd[1939]: time="2025-03-17T17:26:02.508447837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wgvtm,Uid:7edb5304-49a5-461b-9338-b1258121c959,Namespace:kube-system,Attempt:0,}" Mar 17 17:26:02.526975 containerd[1939]: time="2025-03-17T17:26:02.526905541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrrsj,Uid:919b2f40-3627-4cbb-8cc9-8c7357e46b64,Namespace:kube-system,Attempt:0,}" Mar 17 17:26:04.835996 systemd-networkd[1850]: cilium_host: Link UP Mar 17 17:26:04.836768 (udev-worker)[4331]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:04.837053 (udev-worker)[4333]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:04.839658 systemd-networkd[1850]: cilium_net: Link UP Mar 17 17:26:04.842224 systemd-networkd[1850]: cilium_net: Gained carrier Mar 17 17:26:04.842980 systemd-networkd[1850]: cilium_host: Gained carrier Mar 17 17:26:05.024898 (udev-worker)[4371]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:26:05.036416 systemd-networkd[1850]: cilium_vxlan: Link UP Mar 17 17:26:05.036432 systemd-networkd[1850]: cilium_vxlan: Gained carrier Mar 17 17:26:05.060335 systemd-networkd[1850]: cilium_host: Gained IPv6LL Mar 17 17:26:05.292329 systemd-networkd[1850]: cilium_net: Gained IPv6LL Mar 17 17:26:05.521070 kernel: NET: Registered PF_ALG protocol family Mar 17 17:26:06.820523 systemd-networkd[1850]: cilium_vxlan: Gained IPv6LL Mar 17 17:26:06.851355 (udev-worker)[4370]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:06.855247 systemd-networkd[1850]: lxc_health: Link UP Mar 17 17:26:06.874783 systemd-networkd[1850]: lxc_health: Gained carrier Mar 17 17:26:07.165424 systemd-networkd[1850]: lxc6c9aa426b15c: Link UP Mar 17 17:26:07.177153 kernel: eth0: renamed from tmp3d2ef Mar 17 17:26:07.184003 systemd-networkd[1850]: lxc6c9aa426b15c: Gained carrier Mar 17 17:26:07.204880 systemd-networkd[1850]: lxc3f42e2b2a192: Link UP Mar 17 17:26:07.213131 kernel: eth0: renamed from tmp49f4b Mar 17 17:26:07.222198 systemd-networkd[1850]: lxc3f42e2b2a192: Gained carrier Mar 17 17:26:08.270156 kubelet[3536]: I0317 17:26:08.269949 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-79b5d" podStartSLOduration=13.337723952 podStartE2EDuration="23.26992509s" podCreationTimestamp="2025-03-17 17:25:45 +0000 UTC" firstStartedPulling="2025-03-17 17:25:46.441286581 +0000 UTC m=+15.358765997" lastFinishedPulling="2025-03-17 17:25:56.373487623 +0000 UTC m=+25.290967135" observedRunningTime="2025-03-17 17:26:02.685133954 +0000 UTC m=+31.602613406" watchObservedRunningTime="2025-03-17 17:26:08.26992509 +0000 UTC m=+37.187404506" Mar 17 17:26:08.422303 systemd-networkd[1850]: lxc_health: Gained IPv6LL Mar 17 17:26:08.548293 systemd-networkd[1850]: lxc3f42e2b2a192: Gained IPv6LL Mar 17 17:26:09.188339 systemd-networkd[1850]: lxc6c9aa426b15c: Gained IPv6LL Mar 17 17:26:11.900906 ntpd[1911]: Listen normally on 8 
cilium_host 192.168.0.64:123 Mar 17 17:26:11.901103 ntpd[1911]: Listen normally on 9 cilium_net [fe80::38fa:12ff:fe48:50e6%4]:123 Mar 17 17:26:11.901187 ntpd[1911]: Listen normally on 10 cilium_host [fe80::ec6f:9cff:fe6b:acc6%5]:123 Mar 17 17:26:11.901257 ntpd[1911]: Listen normally on 11 cilium_vxlan [fe80::1876:36ff:feff:28bf%6]:123 Mar 17 17:26:11.901327 ntpd[1911]: Listen normally on 12 lxc_health [fe80::6837:6bff:feee:d4da%8]:123 Mar 17 17:26:11.902116 ntpd[1911]: Listen normally on 13 lxc6c9aa426b15c [fe80::6c59:caff:fe2b:1782%10]:123 Mar 17 17:26:11.902224 ntpd[1911]: Listen normally on 14 lxc3f42e2b2a192 [fe80::7ce7:adff:fec0:d63d%12]:123 Mar 17 17:26:15.638226 containerd[1939]: time="2025-03-17T17:26:15.637158974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:15.638226 containerd[1939]: time="2025-03-17T17:26:15.637251614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:15.638226 containerd[1939]: time="2025-03-17T17:26:15.637277930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:15.638226 containerd[1939]: time="2025-03-17T17:26:15.637418774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:15.691284 systemd[1]: run-containerd-runc-k8s.io-3d2ef9f88e0bc899725124afc2846e4b9a9e9bce67b9ec9000660aa63dc7aaaf-runc.SFw9m2.mount: Deactivated successfully. Mar 17 17:26:15.710379 systemd[1]: Started cri-containerd-3d2ef9f88e0bc899725124afc2846e4b9a9e9bce67b9ec9000660aa63dc7aaaf.scope - libcontainer container 3d2ef9f88e0bc899725124afc2846e4b9a9e9bce67b9ec9000660aa63dc7aaaf. Mar 17 17:26:15.809695 containerd[1939]: time="2025-03-17T17:26:15.809286447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:15.809695 containerd[1939]: time="2025-03-17T17:26:15.809394315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:15.809695 containerd[1939]: time="2025-03-17T17:26:15.809461035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:15.809976 containerd[1939]: time="2025-03-17T17:26:15.809790735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:15.859792 containerd[1939]: time="2025-03-17T17:26:15.859622763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wgvtm,Uid:7edb5304-49a5-461b-9338-b1258121c959,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d2ef9f88e0bc899725124afc2846e4b9a9e9bce67b9ec9000660aa63dc7aaaf\"" Mar 17 17:26:15.892059 containerd[1939]: time="2025-03-17T17:26:15.891866932Z" level=info msg="CreateContainer within sandbox \"3d2ef9f88e0bc899725124afc2846e4b9a9e9bce67b9ec9000660aa63dc7aaaf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:26:15.892936 systemd[1]: Started cri-containerd-49f4b628073c0ca8b1077d09d0ee74719712bad0511b42c22cdab917ebe2d4de.scope - libcontainer container 49f4b628073c0ca8b1077d09d0ee74719712bad0511b42c22cdab917ebe2d4de. Mar 17 17:26:15.936176 containerd[1939]: time="2025-03-17T17:26:15.936089524Z" level=info msg="CreateContainer within sandbox \"3d2ef9f88e0bc899725124afc2846e4b9a9e9bce67b9ec9000660aa63dc7aaaf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4bd862c82b24d85e33fc350c67774a8a898afb45a6be2a35edc8096a9af3460\"" Mar 17 17:26:15.937975 containerd[1939]: time="2025-03-17T17:26:15.937897264Z" level=info msg="StartContainer for \"e4bd862c82b24d85e33fc350c67774a8a898afb45a6be2a35edc8096a9af3460\"" Mar 17 17:26:16.016865 systemd[1]: Started cri-containerd-e4bd862c82b24d85e33fc350c67774a8a898afb45a6be2a35edc8096a9af3460.scope - libcontainer container e4bd862c82b24d85e33fc350c67774a8a898afb45a6be2a35edc8096a9af3460. 
Mar 17 17:26:16.041652 containerd[1939]: time="2025-03-17T17:26:16.041587704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrrsj,Uid:919b2f40-3627-4cbb-8cc9-8c7357e46b64,Namespace:kube-system,Attempt:0,} returns sandbox id \"49f4b628073c0ca8b1077d09d0ee74719712bad0511b42c22cdab917ebe2d4de\"" Mar 17 17:26:16.052415 containerd[1939]: time="2025-03-17T17:26:16.052318644Z" level=info msg="CreateContainer within sandbox \"49f4b628073c0ca8b1077d09d0ee74719712bad0511b42c22cdab917ebe2d4de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:26:16.084793 containerd[1939]: time="2025-03-17T17:26:16.084100861Z" level=info msg="CreateContainer within sandbox \"49f4b628073c0ca8b1077d09d0ee74719712bad0511b42c22cdab917ebe2d4de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8cecd26b368ba7154d044e42424c29ad90f7495a8383f3cb02d25c36d065378\"" Mar 17 17:26:16.087964 containerd[1939]: time="2025-03-17T17:26:16.085765765Z" level=info msg="StartContainer for \"d8cecd26b368ba7154d044e42424c29ad90f7495a8383f3cb02d25c36d065378\"" Mar 17 17:26:16.129048 containerd[1939]: time="2025-03-17T17:26:16.128824213Z" level=info msg="StartContainer for \"e4bd862c82b24d85e33fc350c67774a8a898afb45a6be2a35edc8096a9af3460\" returns successfully" Mar 17 17:26:16.178749 systemd[1]: Started cri-containerd-d8cecd26b368ba7154d044e42424c29ad90f7495a8383f3cb02d25c36d065378.scope - libcontainer container d8cecd26b368ba7154d044e42424c29ad90f7495a8383f3cb02d25c36d065378. Mar 17 17:26:16.259050 containerd[1939]: time="2025-03-17T17:26:16.258896173Z" level=info msg="StartContainer for \"d8cecd26b368ba7154d044e42424c29ad90f7495a8383f3cb02d25c36d065378\" returns successfully" Mar 17 17:26:16.648577 systemd[1]: run-containerd-runc-k8s.io-49f4b628073c0ca8b1077d09d0ee74719712bad0511b42c22cdab917ebe2d4de-runc.HycMTf.mount: Deactivated successfully. 
Mar 17 17:26:16.700331 kubelet[3536]: I0317 17:26:16.700110 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qrrsj" podStartSLOduration=30.700086364 podStartE2EDuration="30.700086364s" podCreationTimestamp="2025-03-17 17:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:26:16.699749032 +0000 UTC m=+45.617228472" watchObservedRunningTime="2025-03-17 17:26:16.700086364 +0000 UTC m=+45.617565792" Mar 17 17:26:16.726607 kubelet[3536]: I0317 17:26:16.723933 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wgvtm" podStartSLOduration=30.723911152 podStartE2EDuration="30.723911152s" podCreationTimestamp="2025-03-17 17:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:26:16.72221728 +0000 UTC m=+45.639696720" watchObservedRunningTime="2025-03-17 17:26:16.723911152 +0000 UTC m=+45.641390568" Mar 17 17:26:16.819352 systemd[1]: Started sshd@9-172.31.30.87:22-139.178.68.195:54750.service - OpenSSH per-connection server daemon (139.178.68.195:54750). Mar 17 17:26:17.011919 sshd[4907]: Accepted publickey for core from 139.178.68.195 port 54750 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:17.014514 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:17.022356 systemd-logind[1916]: New session 10 of user core. Mar 17 17:26:17.029301 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 17 17:26:17.295544 sshd[4911]: Connection closed by 139.178.68.195 port 54750 Mar 17 17:26:17.296417 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:17.301365 systemd[1]: sshd@9-172.31.30.87:22-139.178.68.195:54750.service: Deactivated successfully. Mar 17 17:26:17.306345 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:26:17.311489 systemd-logind[1916]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:26:17.313873 systemd-logind[1916]: Removed session 10. Mar 17 17:26:22.337592 systemd[1]: Started sshd@10-172.31.30.87:22-139.178.68.195:54760.service - OpenSSH per-connection server daemon (139.178.68.195:54760). Mar 17 17:26:22.522158 sshd[4923]: Accepted publickey for core from 139.178.68.195 port 54760 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:22.523827 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:22.532974 systemd-logind[1916]: New session 11 of user core. Mar 17 17:26:22.542260 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:26:22.799077 sshd[4925]: Connection closed by 139.178.68.195 port 54760 Mar 17 17:26:22.799909 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:22.805202 systemd-logind[1916]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:26:22.806510 systemd[1]: sshd@10-172.31.30.87:22-139.178.68.195:54760.service: Deactivated successfully. Mar 17 17:26:22.809777 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:26:22.814777 systemd-logind[1916]: Removed session 11. Mar 17 17:26:27.846805 systemd[1]: Started sshd@11-172.31.30.87:22-139.178.68.195:55250.service - OpenSSH per-connection server daemon (139.178.68.195:55250). 
Mar 17 17:26:28.041556 sshd[4939]: Accepted publickey for core from 139.178.68.195 port 55250 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:28.044240 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:28.052819 systemd-logind[1916]: New session 12 of user core. Mar 17 17:26:28.058266 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:26:28.304066 sshd[4941]: Connection closed by 139.178.68.195 port 55250 Mar 17 17:26:28.304361 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:28.310981 systemd[1]: sshd@11-172.31.30.87:22-139.178.68.195:55250.service: Deactivated successfully. Mar 17 17:26:28.314604 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:26:28.316300 systemd-logind[1916]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:26:28.318574 systemd-logind[1916]: Removed session 12. Mar 17 17:26:33.342575 systemd[1]: Started sshd@12-172.31.30.87:22-139.178.68.195:55266.service - OpenSSH per-connection server daemon (139.178.68.195:55266). Mar 17 17:26:33.522873 sshd[4955]: Accepted publickey for core from 139.178.68.195 port 55266 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:33.525381 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:33.533118 systemd-logind[1916]: New session 13 of user core. Mar 17 17:26:33.540291 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:26:33.786363 sshd[4957]: Connection closed by 139.178.68.195 port 55266 Mar 17 17:26:33.787434 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:33.793099 systemd[1]: sshd@12-172.31.30.87:22-139.178.68.195:55266.service: Deactivated successfully. Mar 17 17:26:33.793239 systemd-logind[1916]: Session 13 logged out. Waiting for processes to exit. 
Mar 17 17:26:33.796792 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:26:33.801168 systemd-logind[1916]: Removed session 13. Mar 17 17:26:38.833718 systemd[1]: Started sshd@13-172.31.30.87:22-139.178.68.195:59240.service - OpenSSH per-connection server daemon (139.178.68.195:59240). Mar 17 17:26:39.024593 sshd[4969]: Accepted publickey for core from 139.178.68.195 port 59240 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:39.027140 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:39.037474 systemd-logind[1916]: New session 14 of user core. Mar 17 17:26:39.044438 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:26:39.286462 sshd[4971]: Connection closed by 139.178.68.195 port 59240 Mar 17 17:26:39.289310 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:39.295499 systemd[1]: sshd@13-172.31.30.87:22-139.178.68.195:59240.service: Deactivated successfully. Mar 17 17:26:39.298550 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:26:39.300922 systemd-logind[1916]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:26:39.303413 systemd-logind[1916]: Removed session 14. Mar 17 17:26:44.326551 systemd[1]: Started sshd@14-172.31.30.87:22-139.178.68.195:59248.service - OpenSSH per-connection server daemon (139.178.68.195:59248). Mar 17 17:26:44.516461 sshd[4983]: Accepted publickey for core from 139.178.68.195 port 59248 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:44.519078 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:44.527472 systemd-logind[1916]: New session 15 of user core. Mar 17 17:26:44.532334 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 17 17:26:44.774634 sshd[4985]: Connection closed by 139.178.68.195 port 59248 Mar 17 17:26:44.774515 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:44.782088 systemd[1]: sshd@14-172.31.30.87:22-139.178.68.195:59248.service: Deactivated successfully. Mar 17 17:26:44.786501 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:26:44.787997 systemd-logind[1916]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:26:44.790082 systemd-logind[1916]: Removed session 15. Mar 17 17:26:44.812634 systemd[1]: Started sshd@15-172.31.30.87:22-139.178.68.195:59258.service - OpenSSH per-connection server daemon (139.178.68.195:59258). Mar 17 17:26:45.004604 sshd[4997]: Accepted publickey for core from 139.178.68.195 port 59258 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:45.007278 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:45.016183 systemd-logind[1916]: New session 16 of user core. Mar 17 17:26:45.023301 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:26:45.347442 sshd[4999]: Connection closed by 139.178.68.195 port 59258 Mar 17 17:26:45.349907 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:45.360240 systemd[1]: sshd@15-172.31.30.87:22-139.178.68.195:59258.service: Deactivated successfully. Mar 17 17:26:45.367805 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:26:45.372700 systemd-logind[1916]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:26:45.403594 systemd[1]: Started sshd@16-172.31.30.87:22-139.178.68.195:59264.service - OpenSSH per-connection server daemon (139.178.68.195:59264). Mar 17 17:26:45.406125 systemd-logind[1916]: Removed session 16. 
Mar 17 17:26:45.593768 sshd[5007]: Accepted publickey for core from 139.178.68.195 port 59264 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:45.596255 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:45.605372 systemd-logind[1916]: New session 17 of user core. Mar 17 17:26:45.616321 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:26:45.876380 sshd[5009]: Connection closed by 139.178.68.195 port 59264 Mar 17 17:26:45.875121 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:45.880612 systemd-logind[1916]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:26:45.881629 systemd[1]: sshd@16-172.31.30.87:22-139.178.68.195:59264.service: Deactivated successfully. Mar 17 17:26:45.885457 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:26:45.890727 systemd-logind[1916]: Removed session 17. Mar 17 17:26:50.918539 systemd[1]: Started sshd@17-172.31.30.87:22-139.178.68.195:46572.service - OpenSSH per-connection server daemon (139.178.68.195:46572). Mar 17 17:26:51.114991 sshd[5030]: Accepted publickey for core from 139.178.68.195 port 46572 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:51.117799 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:51.126198 systemd-logind[1916]: New session 18 of user core. Mar 17 17:26:51.135288 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:26:51.390090 sshd[5032]: Connection closed by 139.178.68.195 port 46572 Mar 17 17:26:51.390965 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:51.396406 systemd[1]: sshd@17-172.31.30.87:22-139.178.68.195:46572.service: Deactivated successfully. Mar 17 17:26:51.400966 systemd[1]: session-18.scope: Deactivated successfully. 
Mar 17 17:26:51.404503 systemd-logind[1916]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:26:51.407003 systemd-logind[1916]: Removed session 18. Mar 17 17:26:56.432546 systemd[1]: Started sshd@18-172.31.30.87:22-139.178.68.195:40432.service - OpenSSH per-connection server daemon (139.178.68.195:40432). Mar 17 17:26:56.616280 sshd[5043]: Accepted publickey for core from 139.178.68.195 port 40432 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:56.618839 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:56.626877 systemd-logind[1916]: New session 19 of user core. Mar 17 17:26:56.638260 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:26:56.894045 sshd[5045]: Connection closed by 139.178.68.195 port 40432 Mar 17 17:26:56.894869 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:56.901206 systemd-logind[1916]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:26:56.902281 systemd[1]: sshd@18-172.31.30.87:22-139.178.68.195:40432.service: Deactivated successfully. Mar 17 17:26:56.906532 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:26:56.909821 systemd-logind[1916]: Removed session 19. Mar 17 17:27:01.936570 systemd[1]: Started sshd@19-172.31.30.87:22-139.178.68.195:40446.service - OpenSSH per-connection server daemon (139.178.68.195:40446). Mar 17 17:27:02.119240 sshd[5056]: Accepted publickey for core from 139.178.68.195 port 40446 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:02.122208 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:02.130813 systemd-logind[1916]: New session 20 of user core. Mar 17 17:27:02.135281 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 17 17:27:02.382260 sshd[5058]: Connection closed by 139.178.68.195 port 40446 Mar 17 17:27:02.383213 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:02.391317 systemd[1]: sshd@19-172.31.30.87:22-139.178.68.195:40446.service: Deactivated successfully. Mar 17 17:27:02.396755 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:27:02.399710 systemd-logind[1916]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:27:02.402622 systemd-logind[1916]: Removed session 20. Mar 17 17:27:02.422548 systemd[1]: Started sshd@20-172.31.30.87:22-139.178.68.195:40450.service - OpenSSH per-connection server daemon (139.178.68.195:40450). Mar 17 17:27:02.614194 sshd[5068]: Accepted publickey for core from 139.178.68.195 port 40450 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:02.616701 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:02.624641 systemd-logind[1916]: New session 21 of user core. Mar 17 17:27:02.634284 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:27:02.952708 sshd[5070]: Connection closed by 139.178.68.195 port 40450 Mar 17 17:27:02.953533 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:02.960205 systemd[1]: sshd@20-172.31.30.87:22-139.178.68.195:40450.service: Deactivated successfully. Mar 17 17:27:02.964650 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:27:02.966269 systemd-logind[1916]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:27:02.969379 systemd-logind[1916]: Removed session 21. Mar 17 17:27:02.991549 systemd[1]: Started sshd@21-172.31.30.87:22-139.178.68.195:40452.service - OpenSSH per-connection server daemon (139.178.68.195:40452). 
Mar 17 17:27:03.175745 sshd[5079]: Accepted publickey for core from 139.178.68.195 port 40452 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:03.178304 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:03.189341 systemd-logind[1916]: New session 22 of user core. Mar 17 17:27:03.194296 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:27:05.684666 sshd[5081]: Connection closed by 139.178.68.195 port 40452 Mar 17 17:27:05.685525 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:05.696968 systemd[1]: sshd@21-172.31.30.87:22-139.178.68.195:40452.service: Deactivated successfully. Mar 17 17:27:05.702525 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:27:05.712478 systemd-logind[1916]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:27:05.736277 systemd[1]: Started sshd@22-172.31.30.87:22-139.178.68.195:42008.service - OpenSSH per-connection server daemon (139.178.68.195:42008). Mar 17 17:27:05.738649 systemd-logind[1916]: Removed session 22. Mar 17 17:27:05.924189 sshd[5097]: Accepted publickey for core from 139.178.68.195 port 42008 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:05.926864 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:05.934433 systemd-logind[1916]: New session 23 of user core. Mar 17 17:27:05.944284 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:27:06.427414 sshd[5099]: Connection closed by 139.178.68.195 port 42008 Mar 17 17:27:06.427292 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:06.434857 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:27:06.437444 systemd[1]: sshd@22-172.31.30.87:22-139.178.68.195:42008.service: Deactivated successfully. 
Mar 17 17:27:06.445134 systemd-logind[1916]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:27:06.446771 systemd-logind[1916]: Removed session 23.
Mar 17 17:27:06.465745 systemd[1]: Started sshd@23-172.31.30.87:22-139.178.68.195:42020.service - OpenSSH per-connection server daemon (139.178.68.195:42020).
Mar 17 17:27:06.653891 sshd[5108]: Accepted publickey for core from 139.178.68.195 port 42020 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:06.656386 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:06.664628 systemd-logind[1916]: New session 24 of user core.
Mar 17 17:27:06.671370 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:27:06.910888 sshd[5110]: Connection closed by 139.178.68.195 port 42020
Mar 17 17:27:06.910760 sshd-session[5108]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:06.915948 systemd[1]: sshd@23-172.31.30.87:22-139.178.68.195:42020.service: Deactivated successfully.
Mar 17 17:27:06.920764 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:27:06.922273 systemd-logind[1916]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:27:06.926260 systemd-logind[1916]: Removed session 24.
Mar 17 17:27:11.957463 systemd[1]: Started sshd@24-172.31.30.87:22-139.178.68.195:42036.service - OpenSSH per-connection server daemon (139.178.68.195:42036).
Mar 17 17:27:12.150087 sshd[5121]: Accepted publickey for core from 139.178.68.195 port 42036 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:12.152598 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:12.160390 systemd-logind[1916]: New session 25 of user core.
Mar 17 17:27:12.169283 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 17:27:12.410940 sshd[5123]: Connection closed by 139.178.68.195 port 42036
Mar 17 17:27:12.411770 sshd-session[5121]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:12.418577 systemd[1]: sshd@24-172.31.30.87:22-139.178.68.195:42036.service: Deactivated successfully.
Mar 17 17:27:12.423407 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 17:27:12.424703 systemd-logind[1916]: Session 25 logged out. Waiting for processes to exit.
Mar 17 17:27:12.426814 systemd-logind[1916]: Removed session 25.
Mar 17 17:27:17.454579 systemd[1]: Started sshd@25-172.31.30.87:22-139.178.68.195:50034.service - OpenSSH per-connection server daemon (139.178.68.195:50034).
Mar 17 17:27:17.634831 sshd[5139]: Accepted publickey for core from 139.178.68.195 port 50034 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:17.637368 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:17.645228 systemd-logind[1916]: New session 26 of user core.
Mar 17 17:27:17.650301 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 17:27:17.899140 sshd[5141]: Connection closed by 139.178.68.195 port 50034
Mar 17 17:27:17.899931 sshd-session[5139]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:17.905961 systemd[1]: sshd@25-172.31.30.87:22-139.178.68.195:50034.service: Deactivated successfully.
Mar 17 17:27:17.910911 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 17:27:17.914571 systemd-logind[1916]: Session 26 logged out. Waiting for processes to exit.
Mar 17 17:27:17.916667 systemd-logind[1916]: Removed session 26.
Mar 17 17:27:22.944555 systemd[1]: Started sshd@26-172.31.30.87:22-139.178.68.195:50038.service - OpenSSH per-connection server daemon (139.178.68.195:50038).
Mar 17 17:27:23.130883 sshd[5153]: Accepted publickey for core from 139.178.68.195 port 50038 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:23.133328 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:23.140720 systemd-logind[1916]: New session 27 of user core.
Mar 17 17:27:23.156286 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 17:27:23.393956 sshd[5155]: Connection closed by 139.178.68.195 port 50038
Mar 17 17:27:23.395349 sshd-session[5153]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:23.401290 systemd[1]: sshd@26-172.31.30.87:22-139.178.68.195:50038.service: Deactivated successfully.
Mar 17 17:27:23.407239 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 17:27:23.408766 systemd-logind[1916]: Session 27 logged out. Waiting for processes to exit.
Mar 17 17:27:23.410715 systemd-logind[1916]: Removed session 27.
Mar 17 17:27:28.434544 systemd[1]: Started sshd@27-172.31.30.87:22-139.178.68.195:52736.service - OpenSSH per-connection server daemon (139.178.68.195:52736).
Mar 17 17:27:28.619770 sshd[5165]: Accepted publickey for core from 139.178.68.195 port 52736 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:28.622540 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:28.629900 systemd-logind[1916]: New session 28 of user core.
Mar 17 17:27:28.638288 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 17 17:27:28.874154 sshd[5167]: Connection closed by 139.178.68.195 port 52736
Mar 17 17:27:28.875460 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:28.882197 systemd[1]: sshd@27-172.31.30.87:22-139.178.68.195:52736.service: Deactivated successfully.
Mar 17 17:27:28.886271 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 17:27:28.888670 systemd-logind[1916]: Session 28 logged out. Waiting for processes to exit.
Mar 17 17:27:28.890390 systemd-logind[1916]: Removed session 28.
Mar 17 17:27:28.915612 systemd[1]: Started sshd@28-172.31.30.87:22-139.178.68.195:52748.service - OpenSSH per-connection server daemon (139.178.68.195:52748).
Mar 17 17:27:29.105130 sshd[5178]: Accepted publickey for core from 139.178.68.195 port 52748 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:29.107670 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:29.116587 systemd-logind[1916]: New session 29 of user core.
Mar 17 17:27:29.125253 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 17 17:27:31.877498 containerd[1939]: time="2025-03-17T17:27:31.876713873Z" level=info msg="StopContainer for \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\" with timeout 30 (s)"
Mar 17 17:27:31.888476 containerd[1939]: time="2025-03-17T17:27:31.884167517Z" level=info msg="Stop container \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\" with signal terminated"
Mar 17 17:27:31.922594 containerd[1939]: time="2025-03-17T17:27:31.922518197Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:27:31.926437 systemd[1]: cri-containerd-6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d.scope: Deactivated successfully.
Mar 17 17:27:31.941627 containerd[1939]: time="2025-03-17T17:27:31.941554769Z" level=info msg="StopContainer for \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\" with timeout 2 (s)"
Mar 17 17:27:31.942903 containerd[1939]: time="2025-03-17T17:27:31.942672857Z" level=info msg="Stop container \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\" with signal terminated"
Mar 17 17:27:31.959138 systemd-networkd[1850]: lxc_health: Link DOWN
Mar 17 17:27:31.959158 systemd-networkd[1850]: lxc_health: Lost carrier
Mar 17 17:27:31.988006 systemd[1]: cri-containerd-eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8.scope: Deactivated successfully.
Mar 17 17:27:31.988541 systemd[1]: cri-containerd-eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8.scope: Consumed 14.390s CPU time.
Mar 17 17:27:32.006888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d-rootfs.mount: Deactivated successfully.
Mar 17 17:27:32.032717 containerd[1939]: time="2025-03-17T17:27:32.032182058Z" level=info msg="shim disconnected" id=6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d namespace=k8s.io
Mar 17 17:27:32.033382 containerd[1939]: time="2025-03-17T17:27:32.032815886Z" level=warning msg="cleaning up after shim disconnected" id=6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d namespace=k8s.io
Mar 17 17:27:32.033382 containerd[1939]: time="2025-03-17T17:27:32.033179042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:32.066392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8-rootfs.mount: Deactivated successfully.
Mar 17 17:27:32.082658 containerd[1939]: time="2025-03-17T17:27:32.082561130Z" level=info msg="shim disconnected" id=eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8 namespace=k8s.io
Mar 17 17:27:32.082658 containerd[1939]: time="2025-03-17T17:27:32.082638218Z" level=warning msg="cleaning up after shim disconnected" id=eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8 namespace=k8s.io
Mar 17 17:27:32.082975 containerd[1939]: time="2025-03-17T17:27:32.082664366Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:32.084181 containerd[1939]: time="2025-03-17T17:27:32.083914586Z" level=info msg="StopContainer for \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\" returns successfully"
Mar 17 17:27:32.085141 containerd[1939]: time="2025-03-17T17:27:32.085092062Z" level=info msg="StopPodSandbox for \"c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b\""
Mar 17 17:27:32.085286 containerd[1939]: time="2025-03-17T17:27:32.085251326Z" level=info msg="Container to stop \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:27:32.090953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b-shm.mount: Deactivated successfully.
Mar 17 17:27:32.102455 systemd[1]: cri-containerd-c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b.scope: Deactivated successfully.
Mar 17 17:27:32.122417 containerd[1939]: time="2025-03-17T17:27:32.122345162Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:27:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:27:32.131307 containerd[1939]: time="2025-03-17T17:27:32.130648214Z" level=info msg="StopContainer for \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\" returns successfully"
Mar 17 17:27:32.133094 containerd[1939]: time="2025-03-17T17:27:32.132695198Z" level=info msg="StopPodSandbox for \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\""
Mar 17 17:27:32.133094 containerd[1939]: time="2025-03-17T17:27:32.132774794Z" level=info msg="Container to stop \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:27:32.133094 containerd[1939]: time="2025-03-17T17:27:32.132803150Z" level=info msg="Container to stop \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:27:32.133094 containerd[1939]: time="2025-03-17T17:27:32.132824258Z" level=info msg="Container to stop \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:27:32.133094 containerd[1939]: time="2025-03-17T17:27:32.132859754Z" level=info msg="Container to stop \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:27:32.133094 containerd[1939]: time="2025-03-17T17:27:32.132881894Z" level=info msg="Container to stop \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:27:32.150122 systemd[1]: cri-containerd-decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d.scope: Deactivated successfully.
Mar 17 17:27:32.160765 containerd[1939]: time="2025-03-17T17:27:32.160612874Z" level=info msg="shim disconnected" id=c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b namespace=k8s.io
Mar 17 17:27:32.160765 containerd[1939]: time="2025-03-17T17:27:32.160687382Z" level=warning msg="cleaning up after shim disconnected" id=c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b namespace=k8s.io
Mar 17 17:27:32.160765 containerd[1939]: time="2025-03-17T17:27:32.160710506Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:32.195050 containerd[1939]: time="2025-03-17T17:27:32.194783223Z" level=info msg="TearDown network for sandbox \"c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b\" successfully"
Mar 17 17:27:32.195050 containerd[1939]: time="2025-03-17T17:27:32.194842491Z" level=info msg="StopPodSandbox for \"c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b\" returns successfully"
Mar 17 17:27:32.212623 containerd[1939]: time="2025-03-17T17:27:32.212120679Z" level=info msg="shim disconnected" id=decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d namespace=k8s.io
Mar 17 17:27:32.212623 containerd[1939]: time="2025-03-17T17:27:32.212217687Z" level=warning msg="cleaning up after shim disconnected" id=decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d namespace=k8s.io
Mar 17 17:27:32.212623 containerd[1939]: time="2025-03-17T17:27:32.212239887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:32.239465 containerd[1939]: time="2025-03-17T17:27:32.239402763Z" level=info msg="TearDown network for sandbox \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" successfully"
Mar 17 17:27:32.239465 containerd[1939]: time="2025-03-17T17:27:32.239453823Z" level=info msg="StopPodSandbox for \"decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d\" returns successfully"
Mar 17 17:27:32.281692 kubelet[3536]: I0317 17:27:32.279175 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cni-path\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.281692 kubelet[3536]: I0317 17:27:32.279253 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0db18af2-8cc1-4e45-bd08-a198372edfbd-clustermesh-secrets\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.281692 kubelet[3536]: I0317 17:27:32.279291 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-xtables-lock\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.281692 kubelet[3536]: I0317 17:27:32.279323 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-cgroup\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.281692 kubelet[3536]: I0317 17:27:32.279360 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-run\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.281692 kubelet[3536]: I0317 17:27:32.279393 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-etc-cni-netd\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283554 kubelet[3536]: I0317 17:27:32.279429 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-hubble-tls\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283554 kubelet[3536]: I0317 17:27:32.279459 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-hostproc\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283554 kubelet[3536]: I0317 17:27:32.279493 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-kernel\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283554 kubelet[3536]: I0317 17:27:32.279527 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-lib-modules\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283554 kubelet[3536]: I0317 17:27:32.279563 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-config-path\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283554 kubelet[3536]: I0317 17:27:32.279597 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-bpf-maps\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283891 kubelet[3536]: I0317 17:27:32.279634 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42037dc4-9081-4ced-b1a4-89648b0207b2-cilium-config-path\") pod \"42037dc4-9081-4ced-b1a4-89648b0207b2\" (UID: \"42037dc4-9081-4ced-b1a4-89648b0207b2\") "
Mar 17 17:27:32.283891 kubelet[3536]: I0317 17:27:32.279673 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8trb\" (UniqueName: \"kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-kube-api-access-l8trb\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283891 kubelet[3536]: I0317 17:27:32.279712 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn95g\" (UniqueName: \"kubernetes.io/projected/42037dc4-9081-4ced-b1a4-89648b0207b2-kube-api-access-mn95g\") pod \"42037dc4-9081-4ced-b1a4-89648b0207b2\" (UID: \"42037dc4-9081-4ced-b1a4-89648b0207b2\") "
Mar 17 17:27:32.283891 kubelet[3536]: I0317 17:27:32.279753 3536 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-net\") pod \"0db18af2-8cc1-4e45-bd08-a198372edfbd\" (UID: \"0db18af2-8cc1-4e45-bd08-a198372edfbd\") "
Mar 17 17:27:32.283891 kubelet[3536]: I0317 17:27:32.279864 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.284753 kubelet[3536]: I0317 17:27:32.279926 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cni-path" (OuterVolumeSpecName: "cni-path") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.285680 kubelet[3536]: I0317 17:27:32.285099 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.285680 kubelet[3536]: I0317 17:27:32.285189 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.285680 kubelet[3536]: I0317 17:27:32.285241 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.285680 kubelet[3536]: I0317 17:27:32.285299 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.286358 kubelet[3536]: I0317 17:27:32.286285 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-hostproc" (OuterVolumeSpecName: "hostproc") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.286875 kubelet[3536]: I0317 17:27:32.286725 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.287101 kubelet[3536]: I0317 17:27:32.286975 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.287499 kubelet[3536]: I0317 17:27:32.287349 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 17:27:32.287768 kubelet[3536]: I0317 17:27:32.287674 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db18af2-8cc1-4e45-bd08-a198372edfbd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 17:27:32.297996 kubelet[3536]: I0317 17:27:32.297883 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 17:27:32.300769 kubelet[3536]: I0317 17:27:32.300662 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42037dc4-9081-4ced-b1a4-89648b0207b2-kube-api-access-mn95g" (OuterVolumeSpecName: "kube-api-access-mn95g") pod "42037dc4-9081-4ced-b1a4-89648b0207b2" (UID: "42037dc4-9081-4ced-b1a4-89648b0207b2"). InnerVolumeSpecName "kube-api-access-mn95g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 17:27:32.301975 kubelet[3536]: I0317 17:27:32.301866 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-kube-api-access-l8trb" (OuterVolumeSpecName: "kube-api-access-l8trb") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "kube-api-access-l8trb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 17:27:32.302991 kubelet[3536]: I0317 17:27:32.302936 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42037dc4-9081-4ced-b1a4-89648b0207b2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42037dc4-9081-4ced-b1a4-89648b0207b2" (UID: "42037dc4-9081-4ced-b1a4-89648b0207b2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 17:27:32.303370 kubelet[3536]: I0317 17:27:32.303273 3536 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0db18af2-8cc1-4e45-bd08-a198372edfbd" (UID: "0db18af2-8cc1-4e45-bd08-a198372edfbd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.380925 3536 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-kernel\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.380982 3536 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-lib-modules\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.381004 3536 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-config-path\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.381043 3536 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42037dc4-9081-4ced-b1a4-89648b0207b2-cilium-config-path\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.381070 3536 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-bpf-maps\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.381091 3536 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-host-proc-sys-net\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.381111 3536 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l8trb\" (UniqueName: \"kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-kube-api-access-l8trb\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.381663 kubelet[3536]: I0317 17:27:32.381131 3536 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mn95g\" (UniqueName: \"kubernetes.io/projected/42037dc4-9081-4ced-b1a4-89648b0207b2-kube-api-access-mn95g\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381152 3536 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0db18af2-8cc1-4e45-bd08-a198372edfbd-clustermesh-secrets\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381172 3536 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cni-path\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381193 3536 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-run\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381212 3536 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-xtables-lock\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381231 3536 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-cilium-cgroup\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381250 3536 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0db18af2-8cc1-4e45-bd08-a198372edfbd-hubble-tls\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381268 3536 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-etc-cni-netd\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.382233 kubelet[3536]: I0317 17:27:32.381287 3536 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0db18af2-8cc1-4e45-bd08-a198372edfbd-hostproc\") on node \"ip-172-31-30-87\" DevicePath \"\""
Mar 17 17:27:32.872773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7e979991cc97fcd75cb0e506c9c5acd52d1b3b5ba050011a64ffd96eebe943b-rootfs.mount: Deactivated successfully.
Mar 17 17:27:32.873219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d-rootfs.mount: Deactivated successfully.
Mar 17 17:27:32.873484 systemd[1]: var-lib-kubelet-pods-42037dc4\x2d9081\x2d4ced\x2db1a4\x2d89648b0207b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmn95g.mount: Deactivated successfully.
Mar 17 17:27:32.873766 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-decac047572964016723050c9ae38b0be85a42d18495783e185db2080885c35d-shm.mount: Deactivated successfully.
Mar 17 17:27:32.874045 systemd[1]: var-lib-kubelet-pods-0db18af2\x2d8cc1\x2d4e45\x2dbd08\x2da198372edfbd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl8trb.mount: Deactivated successfully.
Mar 17 17:27:32.874219 systemd[1]: var-lib-kubelet-pods-0db18af2\x2d8cc1\x2d4e45\x2dbd08\x2da198372edfbd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 17:27:32.874355 systemd[1]: var-lib-kubelet-pods-0db18af2\x2d8cc1\x2d4e45\x2dbd08\x2da198372edfbd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 17:27:32.895296 kubelet[3536]: I0317 17:27:32.895098 3536 scope.go:117] "RemoveContainer" containerID="6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d"
Mar 17 17:27:32.900095 containerd[1939]: time="2025-03-17T17:27:32.899561382Z" level=info msg="RemoveContainer for \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\""
Mar 17 17:27:32.910732 systemd[1]: Removed slice kubepods-besteffort-pod42037dc4_9081_4ced_b1a4_89648b0207b2.slice - libcontainer container kubepods-besteffort-pod42037dc4_9081_4ced_b1a4_89648b0207b2.slice.
Mar 17 17:27:32.914921 containerd[1939]: time="2025-03-17T17:27:32.913460790Z" level=info msg="RemoveContainer for \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\" returns successfully"
Mar 17 17:27:32.916625 kubelet[3536]: I0317 17:27:32.916256 3536 scope.go:117] "RemoveContainer" containerID="6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d"
Mar 17 17:27:32.918001 containerd[1939]: time="2025-03-17T17:27:32.917909934Z" level=error msg="ContainerStatus for \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\": not found"
Mar 17 17:27:32.921745 kubelet[3536]: E0317 17:27:32.921091 3536 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\": not found" containerID="6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d"
Mar 17 17:27:32.921745 kubelet[3536]: I0317 17:27:32.921172 3536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d"} err="failed to get container status \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6eca1675f4a9ac12ef5dec67141723254e00f57733365d8ab7cff73e16a6183d\": not found"
Mar 17 17:27:32.921745 kubelet[3536]: I0317 17:27:32.921649 3536 scope.go:117] "RemoveContainer" containerID="eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8"
Mar 17 17:27:32.926295 systemd[1]: Removed slice kubepods-burstable-pod0db18af2_8cc1_4e45_bd08_a198372edfbd.slice - libcontainer container kubepods-burstable-pod0db18af2_8cc1_4e45_bd08_a198372edfbd.slice.
Mar 17 17:27:32.927131 systemd[1]: kubepods-burstable-pod0db18af2_8cc1_4e45_bd08_a198372edfbd.slice: Consumed 14.546s CPU time.
Mar 17 17:27:32.930701 containerd[1939]: time="2025-03-17T17:27:32.930584166Z" level=info msg="RemoveContainer for \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\""
Mar 17 17:27:32.938962 containerd[1939]: time="2025-03-17T17:27:32.938883438Z" level=info msg="RemoveContainer for \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\" returns successfully"
Mar 17 17:27:32.939557 kubelet[3536]: I0317 17:27:32.939521 3536 scope.go:117] "RemoveContainer" containerID="f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76"
Mar 17 17:27:32.944222 containerd[1939]: time="2025-03-17T17:27:32.944138550Z" level=info msg="RemoveContainer for \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\""
Mar 17 17:27:32.952156 containerd[1939]: time="2025-03-17T17:27:32.951970854Z" level=info msg="RemoveContainer for \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\" returns successfully"
Mar 17 17:27:32.953050 kubelet[3536]: I0317 17:27:32.952924 3536 scope.go:117] "RemoveContainer" containerID="99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01"
Mar 17 17:27:32.956925 containerd[1939]: time="2025-03-17T17:27:32.956860614Z" level=info msg="RemoveContainer for \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\""
Mar 17 17:27:32.969466 containerd[1939]: time="2025-03-17T17:27:32.968983818Z" level=info msg="RemoveContainer for \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\" returns successfully"
Mar 17 17:27:32.970077 kubelet[3536]: I0317 17:27:32.969832 3536 scope.go:117] "RemoveContainer" containerID="1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6"
Mar 17 17:27:32.974759 containerd[1939]: time="2025-03-17T17:27:32.974690982Z" level=info msg="RemoveContainer for \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\""
Mar 17 17:27:32.988421 containerd[1939]: time="2025-03-17T17:27:32.988367551Z" level=info msg="RemoveContainer for \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\" returns successfully"
Mar 17 17:27:32.989241 kubelet[3536]: I0317 17:27:32.989205 3536 scope.go:117] "RemoveContainer" containerID="160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232"
Mar 17 17:27:32.991421 containerd[1939]: time="2025-03-17T17:27:32.991376539Z" level=info msg="RemoveContainer for \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\""
Mar 17 17:27:33.002175 containerd[1939]: time="2025-03-17T17:27:33.002071611Z" level=info msg="RemoveContainer for \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\" returns successfully"
Mar 17 17:27:33.002595 kubelet[3536]: I0317 17:27:33.002541 3536 scope.go:117] "RemoveContainer" containerID="eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8"
Mar 17 17:27:33.002952 containerd[1939]: time="2025-03-17T17:27:33.002895675Z" level=error msg="ContainerStatus for \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\": not found"
Mar 17 17:27:33.003320 kubelet[3536]: E0317 17:27:33.003275 3536 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\": not found" containerID="eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8"
Mar 17 17:27:33.003426 kubelet[3536]: I0317 17:27:33.003330 3536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8"} err="failed to get container status \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"eda28d74899f869e102d6a401ae481f87c8ec73c51f238aa0df5515f81e792e8\": not found"
Mar 17 17:27:33.003426 kubelet[3536]: I0317 17:27:33.003372 3536 scope.go:117] "RemoveContainer" containerID="f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76"
Mar 17 17:27:33.003740 containerd[1939]: time="2025-03-17T17:27:33.003685035Z" level=error msg="ContainerStatus for \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\": not found"
Mar 17 17:27:33.003936 kubelet[3536]: E0317 17:27:33.003894 3536 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\": not found" containerID="f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76"
Mar 17 17:27:33.004044 kubelet[3536]: I0317 17:27:33.003941 3536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76"} err="failed to get container status \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\": rpc error: code = NotFound desc = an error occurred when try to find container \"f25e25caa3670ee2e786215dd70fda400ac977fae0aa2da4ee026b1cf86e1a76\": not found"
Mar 17 17:27:33.004044 kubelet[3536]: I0317 17:27:33.003972 3536 scope.go:117] "RemoveContainer" containerID="99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01"
Mar 17 17:27:33.004531 containerd[1939]: time="2025-03-17T17:27:33.004435791Z" level=error msg="ContainerStatus for \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\": not found"
Mar 17 17:27:33.004670 kubelet[3536]: E0317 17:27:33.004635 3536 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\": not found" containerID="99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01"
Mar 17 17:27:33.004761 kubelet[3536]: I0317 17:27:33.004676 3536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01"} err="failed to get container status \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\": rpc error: code = NotFound desc = an error occurred when try to find container \"99e0629ba2406a32c324300287f9fd0ac446489a40a54d005178f53bbaafac01\": not found"
Mar 17 17:27:33.004761 kubelet[3536]: I0317 17:27:33.004709 3536 scope.go:117] "RemoveContainer" containerID="1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6"
Mar 17 17:27:33.005208 containerd[1939]: time="2025-03-17T17:27:33.005103411Z" level=error msg="ContainerStatus for \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\": not found"
Mar 17 17:27:33.005762 kubelet[3536]: E0317 17:27:33.005724 3536 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\": not found" containerID="1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6"
Mar 17 17:27:33.005936 kubelet[3536]: I0317 17:27:33.005901 3536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6"} err="failed to get container status \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b79d29e9af06f3e8fd0cf2f90ca6d615aae32cbb15d2e7f6f17316b6dd922a6\": not found"
Mar 17 17:27:33.006084 kubelet[3536]: I0317 17:27:33.006061 3536 scope.go:117] "RemoveContainer" containerID="160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232"
Mar 17 17:27:33.006698 containerd[1939]: time="2025-03-17T17:27:33.006514143Z" level=error msg="ContainerStatus for \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\": not found"
Mar 17 17:27:33.006800 kubelet[3536]: E0317 17:27:33.006743 3536 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\": not found" containerID="160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232"
Mar 17 17:27:33.006887 kubelet[3536]: I0317 17:27:33.006789 3536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232"} err="failed to get container status \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\": rpc error: code = NotFound desc = an error occurred when try to find container \"160a475775ad411a16503c28382c8f71bab9ba49737a662a4aa653406b906232\": not found"
Mar 17 17:27:33.319229 kubelet[3536]: I0317 17:27:33.319181 3536 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" path="/var/lib/kubelet/pods/0db18af2-8cc1-4e45-bd08-a198372edfbd/volumes"
Mar 17 17:27:33.320673 kubelet[3536]: I0317 17:27:33.320632 3536 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42037dc4-9081-4ced-b1a4-89648b0207b2" path="/var/lib/kubelet/pods/42037dc4-9081-4ced-b1a4-89648b0207b2/volumes"
Mar 17 17:27:33.807219 sshd[5180]: Connection closed by 139.178.68.195 port 52748
Mar 17 17:27:33.806335 sshd-session[5178]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:33.812313 systemd-logind[1916]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:27:33.812941 systemd[1]: sshd@28-172.31.30.87:22-139.178.68.195:52748.service: Deactivated successfully.
Mar 17 17:27:33.816825 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:27:33.817226 systemd[1]: session-29.scope: Consumed 1.996s CPU time.
Mar 17 17:27:33.820991 systemd-logind[1916]: Removed session 29.
Mar 17 17:27:33.843546 systemd[1]: Started sshd@29-172.31.30.87:22-139.178.68.195:52762.service - OpenSSH per-connection server daemon (139.178.68.195:52762).
Mar 17 17:27:34.032460 sshd[5344]: Accepted publickey for core from 139.178.68.195 port 52762 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:34.035409 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:34.043605 systemd-logind[1916]: New session 30 of user core.
Mar 17 17:27:34.052304 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 17 17:27:34.900912 ntpd[1911]: Deleting interface #12 lxc_health, fe80::6837:6bff:feee:d4da%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs
Mar 17 17:27:34.901436 ntpd[1911]: 17 Mar 17:27:34 ntpd[1911]: Deleting interface #12 lxc_health, fe80::6837:6bff:feee:d4da%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs
Mar 17 17:27:35.564061 sshd[5346]: Connection closed by 139.178.68.195 port 52762
Mar 17 17:27:35.564802 sshd-session[5344]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:35.576577 kubelet[3536]: I0317 17:27:35.572587 3536 topology_manager.go:215] "Topology Admit Handler" podUID="64d4d220-df3c-412a-b42e-d1b328472628" podNamespace="kube-system" podName="cilium-rkwc4"
Mar 17 17:27:35.576577 kubelet[3536]: E0317 17:27:35.572682 3536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42037dc4-9081-4ced-b1a4-89648b0207b2" containerName="cilium-operator"
Mar 17 17:27:35.576577 kubelet[3536]: E0317 17:27:35.572702 3536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" containerName="cilium-agent"
Mar 17 17:27:35.576577 kubelet[3536]: E0317 17:27:35.572718 3536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" containerName="mount-bpf-fs"
Mar 17 17:27:35.576577 kubelet[3536]: E0317 17:27:35.572733 3536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" containerName="clean-cilium-state"
Mar 17 17:27:35.576577 kubelet[3536]: E0317 17:27:35.572750 3536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" containerName="mount-cgroup"
Mar 17 17:27:35.576577 kubelet[3536]: E0317 17:27:35.572766 3536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" containerName="apply-sysctl-overwrites"
Mar 17 17:27:35.576577 kubelet[3536]: I0317 17:27:35.572810 3536 memory_manager.go:354] "RemoveStaleState removing state" podUID="42037dc4-9081-4ced-b1a4-89648b0207b2" containerName="cilium-operator"
Mar 17 17:27:35.576577 kubelet[3536]: I0317 17:27:35.572825 3536 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db18af2-8cc1-4e45-bd08-a198372edfbd" containerName="cilium-agent"
Mar 17 17:27:35.574699 systemd[1]: sshd@29-172.31.30.87:22-139.178.68.195:52762.service: Deactivated successfully.
Mar 17 17:27:35.584985 systemd[1]: session-30.scope: Deactivated successfully.
Mar 17 17:27:35.585806 systemd[1]: session-30.scope: Consumed 1.318s CPU time.
Mar 17 17:27:35.607589 kubelet[3536]: I0317 17:27:35.606280 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-xtables-lock\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607589 kubelet[3536]: I0317 17:27:35.606364 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-host-proc-sys-kernel\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607589 kubelet[3536]: I0317 17:27:35.606413 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-cilium-run\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607589 kubelet[3536]: I0317 17:27:35.606449 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64d4d220-df3c-412a-b42e-d1b328472628-clustermesh-secrets\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607589 kubelet[3536]: I0317 17:27:35.606516 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/64d4d220-df3c-412a-b42e-d1b328472628-cilium-ipsec-secrets\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607589 kubelet[3536]: I0317 17:27:35.606556 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-etc-cni-netd\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607982 kubelet[3536]: I0317 17:27:35.606590 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64d4d220-df3c-412a-b42e-d1b328472628-cilium-config-path\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607982 kubelet[3536]: I0317 17:27:35.606624 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-lib-modules\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607982 kubelet[3536]: I0317 17:27:35.606662 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kwnz\" (UniqueName: \"kubernetes.io/projected/64d4d220-df3c-412a-b42e-d1b328472628-kube-api-access-8kwnz\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607982 kubelet[3536]: I0317 17:27:35.606699 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-cilium-cgroup\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607982 kubelet[3536]: I0317 17:27:35.606735 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-host-proc-sys-net\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.607982 kubelet[3536]: I0317 17:27:35.606778 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-hostproc\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.608303 kubelet[3536]: I0317 17:27:35.606815 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64d4d220-df3c-412a-b42e-d1b328472628-hubble-tls\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.608303 kubelet[3536]: I0317 17:27:35.606864 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-bpf-maps\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.608303 kubelet[3536]: I0317 17:27:35.606898 3536 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64d4d220-df3c-412a-b42e-d1b328472628-cni-path\") pod \"cilium-rkwc4\" (UID: \"64d4d220-df3c-412a-b42e-d1b328472628\") " pod="kube-system/cilium-rkwc4"
Mar 17 17:27:35.614555 systemd-logind[1916]: Session 30 logged out. Waiting for processes to exit.
Mar 17 17:27:35.628586 systemd[1]: Started sshd@30-172.31.30.87:22-139.178.68.195:52774.service - OpenSSH per-connection server daemon (139.178.68.195:52774).
Mar 17 17:27:35.632708 systemd-logind[1916]: Removed session 30.
Mar 17 17:27:35.647433 systemd[1]: Created slice kubepods-burstable-pod64d4d220_df3c_412a_b42e_d1b328472628.slice - libcontainer container kubepods-burstable-pod64d4d220_df3c_412a_b42e_d1b328472628.slice.
Mar 17 17:27:35.865371 sshd[5355]: Accepted publickey for core from 139.178.68.195 port 52774 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:35.868911 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:35.876909 systemd-logind[1916]: New session 31 of user core.
Mar 17 17:27:35.882278 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 17 17:27:35.960308 containerd[1939]: time="2025-03-17T17:27:35.960234489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkwc4,Uid:64d4d220-df3c-412a-b42e-d1b328472628,Namespace:kube-system,Attempt:0,}"
Mar 17 17:27:36.005368 containerd[1939]: time="2025-03-17T17:27:36.005193210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:27:36.005368 containerd[1939]: time="2025-03-17T17:27:36.005310762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:27:36.005789 containerd[1939]: time="2025-03-17T17:27:36.005341710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:36.006378 containerd[1939]: time="2025-03-17T17:27:36.006282450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:36.011638 sshd[5361]: Connection closed by 139.178.68.195 port 52774
Mar 17 17:27:36.011487 sshd-session[5355]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:36.030108 systemd-logind[1916]: Session 31 logged out. Waiting for processes to exit.
Mar 17 17:27:36.032266 systemd[1]: sshd@30-172.31.30.87:22-139.178.68.195:52774.service: Deactivated successfully.
Mar 17 17:27:36.037523 systemd[1]: session-31.scope: Deactivated successfully.
Mar 17 17:27:36.054189 systemd-logind[1916]: Removed session 31.
Mar 17 17:27:36.060334 systemd[1]: Started cri-containerd-b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc.scope - libcontainer container b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc.
Mar 17 17:27:36.064977 systemd[1]: Started sshd@31-172.31.30.87:22-139.178.68.195:56528.service - OpenSSH per-connection server daemon (139.178.68.195:56528).
Mar 17 17:27:36.122597 containerd[1939]: time="2025-03-17T17:27:36.122317110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkwc4,Uid:64d4d220-df3c-412a-b42e-d1b328472628,Namespace:kube-system,Attempt:0,} returns sandbox id \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\""
Mar 17 17:27:36.133179 containerd[1939]: time="2025-03-17T17:27:36.133003218Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:27:36.160977 containerd[1939]: time="2025-03-17T17:27:36.160899630Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b\""
Mar 17 17:27:36.162423 containerd[1939]: time="2025-03-17T17:27:36.161996466Z" level=info msg="StartContainer for \"9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b\""
Mar 17 17:27:36.206375 systemd[1]: Started cri-containerd-9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b.scope - libcontainer container 9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b.
Mar 17 17:27:36.264991 containerd[1939]: time="2025-03-17T17:27:36.264922519Z" level=info msg="StartContainer for \"9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b\" returns successfully"
Mar 17 17:27:36.281340 systemd[1]: cri-containerd-9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b.scope: Deactivated successfully.
Mar 17 17:27:36.289188 sshd[5395]: Accepted publickey for core from 139.178.68.195 port 56528 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE
Mar 17 17:27:36.293160 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:27:36.305129 systemd-logind[1916]: New session 32 of user core.
Mar 17 17:27:36.313490 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 17 17:27:36.349693 containerd[1939]: time="2025-03-17T17:27:36.349585999Z" level=info msg="shim disconnected" id=9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b namespace=k8s.io
Mar 17 17:27:36.349693 containerd[1939]: time="2025-03-17T17:27:36.349665223Z" level=warning msg="cleaning up after shim disconnected" id=9d386f4d047c45d4922510a75b75912dff54c6666c27af39c2e470330a46b64b namespace=k8s.io
Mar 17 17:27:36.349693 containerd[1939]: time="2025-03-17T17:27:36.349685899Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:36.546253 kubelet[3536]: E0317 17:27:36.546125 3536 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:27:36.929341 containerd[1939]: time="2025-03-17T17:27:36.929136694Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:27:36.956716 containerd[1939]: time="2025-03-17T17:27:36.956659894Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164\""
Mar 17 17:27:36.958047 containerd[1939]: time="2025-03-17T17:27:36.957944530Z" level=info msg="StartContainer for \"f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164\""
Mar 17 17:27:37.022082 systemd[1]: Started cri-containerd-f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164.scope - libcontainer container f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164.
Mar 17 17:27:37.069449 containerd[1939]: time="2025-03-17T17:27:37.068846431Z" level=info msg="StartContainer for \"f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164\" returns successfully"
Mar 17 17:27:37.081954 systemd[1]: cri-containerd-f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164.scope: Deactivated successfully.
Mar 17 17:27:37.127506 containerd[1939]: time="2025-03-17T17:27:37.127296955Z" level=info msg="shim disconnected" id=f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164 namespace=k8s.io
Mar 17 17:27:37.127506 containerd[1939]: time="2025-03-17T17:27:37.127370479Z" level=warning msg="cleaning up after shim disconnected" id=f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164 namespace=k8s.io
Mar 17 17:27:37.127506 containerd[1939]: time="2025-03-17T17:27:37.127390543Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:37.148626 containerd[1939]: time="2025-03-17T17:27:37.148552987Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:27:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:27:37.725512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f79763a2bf8374b9372c457c2f1da968eb381339694c27a5ad21c331c18e2164-rootfs.mount: Deactivated successfully.
Mar 17 17:27:37.935372 containerd[1939]: time="2025-03-17T17:27:37.935150723Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:27:37.963698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208115234.mount: Deactivated successfully.
Mar 17 17:27:37.967622 containerd[1939]: time="2025-03-17T17:27:37.967566455Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43\""
Mar 17 17:27:37.972900 containerd[1939]: time="2025-03-17T17:27:37.972156395Z" level=info msg="StartContainer for \"6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43\""
Mar 17 17:27:38.030368 systemd[1]: Started cri-containerd-6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43.scope - libcontainer container 6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43.
Mar 17 17:27:38.084813 containerd[1939]: time="2025-03-17T17:27:38.084637880Z" level=info msg="StartContainer for \"6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43\" returns successfully"
Mar 17 17:27:38.089922 systemd[1]: cri-containerd-6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43.scope: Deactivated successfully.
Mar 17 17:27:38.138275 containerd[1939]: time="2025-03-17T17:27:38.138173276Z" level=info msg="shim disconnected" id=6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43 namespace=k8s.io
Mar 17 17:27:38.138275 containerd[1939]: time="2025-03-17T17:27:38.138253076Z" level=warning msg="cleaning up after shim disconnected" id=6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43 namespace=k8s.io
Mar 17 17:27:38.138275 containerd[1939]: time="2025-03-17T17:27:38.138278324Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:38.726093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cb73fc1db7d976d0160321882c2d93295bcbe0af50c219afebebe9285a18a43-rootfs.mount: Deactivated successfully.
Mar 17 17:27:38.945366 containerd[1939]: time="2025-03-17T17:27:38.945293748Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:27:38.976106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487997585.mount: Deactivated successfully.
Mar 17 17:27:38.982564 containerd[1939]: time="2025-03-17T17:27:38.982500804Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7\""
Mar 17 17:27:38.983408 containerd[1939]: time="2025-03-17T17:27:38.983359356Z" level=info msg="StartContainer for \"fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7\""
Mar 17 17:27:39.049368 systemd[1]: Started cri-containerd-fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7.scope - libcontainer container fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7.
Mar 17 17:27:39.109398 systemd[1]: cri-containerd-fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7.scope: Deactivated successfully.
Mar 17 17:27:39.113533 containerd[1939]: time="2025-03-17T17:27:39.113471601Z" level=info msg="StartContainer for \"fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7\" returns successfully"
Mar 17 17:27:39.159532 containerd[1939]: time="2025-03-17T17:27:39.159441213Z" level=info msg="shim disconnected" id=fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7 namespace=k8s.io
Mar 17 17:27:39.159532 containerd[1939]: time="2025-03-17T17:27:39.159519933Z" level=warning msg="cleaning up after shim disconnected" id=fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7 namespace=k8s.io
Mar 17 17:27:39.159830 containerd[1939]: time="2025-03-17T17:27:39.159545469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:39.726219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd2e33d2ac01e0bd5e66dcbdb538fefd583f98ee40b8cde8e22d2e034dde11b7-rootfs.mount: Deactivated successfully.
Mar 17 17:27:39.953622 containerd[1939]: time="2025-03-17T17:27:39.953555713Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:27:39.987987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052508939.mount: Deactivated successfully.
Mar 17 17:27:39.996132 containerd[1939]: time="2025-03-17T17:27:39.995257717Z" level=info msg="CreateContainer within sandbox \"b017372276c63be655403cae60ae67eecc034334e2afb0e5435775b5167622dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6fa6963b62b4541e501b7393aa932d33bdd1212e58418717b30b11878d17a14d\""
Mar 17 17:27:39.997109 containerd[1939]: time="2025-03-17T17:27:39.996607429Z" level=info msg="StartContainer for \"6fa6963b62b4541e501b7393aa932d33bdd1212e58418717b30b11878d17a14d\""
Mar 17 17:27:40.057348 systemd[1]: Started cri-containerd-6fa6963b62b4541e501b7393aa932d33bdd1212e58418717b30b11878d17a14d.scope - libcontainer container 6fa6963b62b4541e501b7393aa932d33bdd1212e58418717b30b11878d17a14d.
Mar 17 17:27:40.186389 containerd[1939]: time="2025-03-17T17:27:40.185435950Z" level=info msg="StartContainer for \"6fa6963b62b4541e501b7393aa932d33bdd1212e58418717b30b11878d17a14d\" returns successfully"
Mar 17 17:27:41.005068 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:27:45.040169 systemd[1]: run-containerd-runc-k8s.io-6fa6963b62b4541e501b7393aa932d33bdd1212e58418717b30b11878d17a14d-runc.rvKbCu.mount: Deactivated successfully.
Mar 17 17:27:45.249069 (udev-worker)[6222]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:27:45.250828 (udev-worker)[6221]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:27:45.264372 systemd-networkd[1850]: lxc_health: Link UP
Mar 17 17:27:45.279960 systemd-networkd[1850]: lxc_health: Gained carrier
Mar 17 17:27:45.996535 kubelet[3536]: I0317 17:27:45.996458 3536 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rkwc4" podStartSLOduration=10.996437911 podStartE2EDuration="10.996437911s" podCreationTimestamp="2025-03-17 17:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:41.00011023 +0000 UTC m=+129.917589658" watchObservedRunningTime="2025-03-17 17:27:45.996437911 +0000 UTC m=+134.913917363"
Mar 17 17:27:46.340324 systemd-networkd[1850]: lxc_health: Gained IPv6LL
Mar 17 17:27:48.901145 ntpd[1911]: Listen normally on 15 lxc_health [fe80::a0e5:4ff:fe1b:a5b4%14]:123
Mar 17 17:27:48.901954 ntpd[1911]: 17 Mar 17:27:48 ntpd[1911]: Listen normally on 15 lxc_health [fe80::a0e5:4ff:fe1b:a5b4%14]:123
Mar 17 17:27:52.009279 sshd[5459]: Connection closed by 139.178.68.195 port 56528
Mar 17 17:27:52.010533 sshd-session[5395]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:52.018604 systemd[1]: sshd@31-172.31.30.87:22-139.178.68.195:56528.service: Deactivated successfully.
Mar 17 17:27:52.024320 systemd[1]: session-32.scope: Deactivated successfully.
Mar 17 17:27:52.030732 systemd-logind[1916]: Session 32 logged out. Waiting for processes to exit.
Mar 17 17:27:52.034325 systemd-logind[1916]: Removed session 32.
Mar 17 17:28:05.521798 systemd[1]: cri-containerd-e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0.scope: Deactivated successfully.
Mar 17 17:28:05.522958 systemd[1]: cri-containerd-e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0.scope: Consumed 4.509s CPU time, 22.2M memory peak, 0B memory swap peak.
Mar 17 17:28:05.559950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0-rootfs.mount: Deactivated successfully.
Mar 17 17:28:05.573698 containerd[1939]: time="2025-03-17T17:28:05.573601512Z" level=info msg="shim disconnected" id=e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0 namespace=k8s.io
Mar 17 17:28:05.573698 containerd[1939]: time="2025-03-17T17:28:05.573679572Z" level=warning msg="cleaning up after shim disconnected" id=e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0 namespace=k8s.io
Mar 17 17:28:05.573698 containerd[1939]: time="2025-03-17T17:28:05.573699096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:28:05.594217 containerd[1939]: time="2025-03-17T17:28:05.594149700Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:28:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:28:06.036574 kubelet[3536]: I0317 17:28:06.036510 3536 scope.go:117] "RemoveContainer" containerID="e0dd82a2bb20f06dde3a7730c5db6c6400a35e77b6c88616205eb881f5995cc0"
Mar 17 17:28:06.041092 containerd[1939]: time="2025-03-17T17:28:06.041010659Z" level=info msg="CreateContainer within sandbox \"80a5ffb016f16d0825d0d35cf18c3e2568aa97fb18d4929e3df6a4ebecc9ed4a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 17:28:06.070806 containerd[1939]: time="2025-03-17T17:28:06.070748903Z" level=info msg="CreateContainer within sandbox \"80a5ffb016f16d0825d0d35cf18c3e2568aa97fb18d4929e3df6a4ebecc9ed4a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b56db61ea52866cde8410d08217009ee26a2c197e2da4b06c9f6b5e8104942a0\""
Mar 17 17:28:06.071815 containerd[1939]: time="2025-03-17T17:28:06.071713259Z" level=info msg="StartContainer for \"b56db61ea52866cde8410d08217009ee26a2c197e2da4b06c9f6b5e8104942a0\""
Mar 17 17:28:06.134348 systemd[1]: Started cri-containerd-b56db61ea52866cde8410d08217009ee26a2c197e2da4b06c9f6b5e8104942a0.scope - libcontainer container b56db61ea52866cde8410d08217009ee26a2c197e2da4b06c9f6b5e8104942a0.
Mar 17 17:28:06.206366 containerd[1939]: time="2025-03-17T17:28:06.206296440Z" level=info msg="StartContainer for \"b56db61ea52866cde8410d08217009ee26a2c197e2da4b06c9f6b5e8104942a0\" returns successfully"
Mar 17 17:28:06.563598 systemd[1]: run-containerd-runc-k8s.io-b56db61ea52866cde8410d08217009ee26a2c197e2da4b06c9f6b5e8104942a0-runc.JEVOrt.mount: Deactivated successfully.
Mar 17 17:28:10.288592 systemd[1]: cri-containerd-349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3.scope: Deactivated successfully.
Mar 17 17:28:10.289891 systemd[1]: cri-containerd-349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3.scope: Consumed 4.282s CPU time, 15.7M memory peak, 0B memory swap peak.
Mar 17 17:28:10.333437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3-rootfs.mount: Deactivated successfully.
Mar 17 17:28:10.346247 containerd[1939]: time="2025-03-17T17:28:10.346136980Z" level=info msg="shim disconnected" id=349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3 namespace=k8s.io
Mar 17 17:28:10.346247 containerd[1939]: time="2025-03-17T17:28:10.346219468Z" level=warning msg="cleaning up after shim disconnected" id=349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3 namespace=k8s.io
Mar 17 17:28:10.346247 containerd[1939]: time="2025-03-17T17:28:10.346241212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:28:11.057798 kubelet[3536]: I0317 17:28:11.057179 3536 scope.go:117] "RemoveContainer" containerID="349971740e99e48a3e1c94a99d8c471aff1d536d7305a1e037424c2d401d53c3"
Mar 17 17:28:11.060683 containerd[1939]: time="2025-03-17T17:28:11.060635116Z" level=info msg="CreateContainer within sandbox \"16197dd13e34c51dd85054734fe7db2cde17df76e0ef24cbb5c6f9a05f2ff1e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 17:28:11.091736 containerd[1939]: time="2025-03-17T17:28:11.091542028Z" level=info msg="CreateContainer within sandbox \"16197dd13e34c51dd85054734fe7db2cde17df76e0ef24cbb5c6f9a05f2ff1e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a675a0c8f24f413286d7ad57cb15a75ce40f66fdbe97b3ff023e857247175448\""
Mar 17 17:28:11.092412 containerd[1939]: time="2025-03-17T17:28:11.092365900Z" level=info msg="StartContainer for \"a675a0c8f24f413286d7ad57cb15a75ce40f66fdbe97b3ff023e857247175448\""
Mar 17 17:28:11.145344 systemd[1]: Started cri-containerd-a675a0c8f24f413286d7ad57cb15a75ce40f66fdbe97b3ff023e857247175448.scope - libcontainer container a675a0c8f24f413286d7ad57cb15a75ce40f66fdbe97b3ff023e857247175448.
Mar 17 17:28:11.208040 containerd[1939]: time="2025-03-17T17:28:11.207944944Z" level=info msg="StartContainer for \"a675a0c8f24f413286d7ad57cb15a75ce40f66fdbe97b3ff023e857247175448\" returns successfully"
Mar 17 17:28:14.320490 kubelet[3536]: E0317 17:28:14.319668 3536 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-87?timeout=10s\": context deadline exceeded"
Mar 17 17:28:24.321723 kubelet[3536]: E0317 17:28:24.321407 3536 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-87?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"