Mar 19 11:33:15.190344 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 19 11:33:15.190390 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025
Mar 19 11:33:15.190414 kernel: KASLR disabled due to lack of seed
Mar 19 11:33:15.190430 kernel: efi: EFI v2.7 by EDK II
Mar 19 11:33:15.190446 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Mar 19 11:33:15.190461 kernel: secureboot: Secure boot disabled
Mar 19 11:33:15.190478 kernel: ACPI: Early table checksum verification disabled
Mar 19 11:33:15.190493 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 19 11:33:15.190508 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 19 11:33:15.190523 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 19 11:33:15.190543 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 19 11:33:15.190559 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 19 11:33:15.190573 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 19 11:33:15.190589 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 19 11:33:15.190607 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 19 11:33:15.190627 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 19 11:33:15.190644 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 19 11:33:15.190660 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 19 11:33:15.190676 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 19 11:33:15.190692 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 19 11:33:15.190708 kernel: printk: bootconsole [uart0] enabled
Mar 19 11:33:15.190723 kernel: NUMA: Failed to initialise from firmware
Mar 19 11:33:15.190739 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 19 11:33:15.190755 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 19 11:33:15.190771 kernel: Zone ranges:
Mar 19 11:33:15.190787 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Mar 19 11:33:15.190807 kernel:   DMA32    empty
Mar 19 11:33:15.190823 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 19 11:33:15.190839 kernel: Movable zone start for each node
Mar 19 11:33:15.190854 kernel: Early memory node ranges
Mar 19 11:33:15.190870 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 19 11:33:15.190885 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 19 11:33:15.190901 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Mar 19 11:33:15.190917 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 19 11:33:15.190932 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 19 11:33:15.190948 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 19 11:33:15.190964 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 19 11:33:15.190979 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 19 11:33:15.191000 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 19 11:33:15.191016 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 19 11:33:15.191039 kernel: psci: probing for conduit method from ACPI.
Mar 19 11:33:15.191056 kernel: psci: PSCIv1.0 detected in firmware.
Mar 19 11:33:15.191104 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 19 11:33:15.191128 kernel: psci: Trusted OS migration not required
Mar 19 11:33:15.191146 kernel: psci: SMC Calling Convention v1.1
Mar 19 11:33:15.191163 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 19 11:33:15.191199 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 19 11:33:15.191217 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 19 11:33:15.191234 kernel: Detected PIPT I-cache on CPU0
Mar 19 11:33:15.191251 kernel: CPU features: detected: GIC system register CPU interface
Mar 19 11:33:15.191268 kernel: CPU features: detected: Spectre-v2
Mar 19 11:33:15.191284 kernel: CPU features: detected: Spectre-v3a
Mar 19 11:33:15.191301 kernel: CPU features: detected: Spectre-BHB
Mar 19 11:33:15.191317 kernel: CPU features: detected: ARM erratum 1742098
Mar 19 11:33:15.191334 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 19 11:33:15.191357 kernel: alternatives: applying boot alternatives
Mar 19 11:33:15.191376 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:33:15.191394 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 11:33:15.191410 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 19 11:33:15.191427 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 11:33:15.191444 kernel: Fallback order for Node 0: 0
Mar 19 11:33:15.191460 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Mar 19 11:33:15.191477 kernel: Policy zone: Normal
Mar 19 11:33:15.191493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 11:33:15.191510 kernel: software IO TLB: area num 2.
Mar 19 11:33:15.191531 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 19 11:33:15.191548 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Mar 19 11:33:15.191565 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 19 11:33:15.191582 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 11:33:15.191600 kernel: rcu: RCU event tracing is enabled.
Mar 19 11:33:15.191617 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 19 11:33:15.191634 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 11:33:15.191651 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 11:33:15.191668 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 11:33:15.191685 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 19 11:33:15.191701 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 19 11:33:15.191723 kernel: GICv3: 96 SPIs implemented
Mar 19 11:33:15.191739 kernel: GICv3: 0 Extended SPIs implemented
Mar 19 11:33:15.191756 kernel: Root IRQ handler: gic_handle_irq
Mar 19 11:33:15.191772 kernel: GICv3: GICv3 features: 16 PPIs
Mar 19 11:33:15.191789 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 19 11:33:15.191805 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 19 11:33:15.191822 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 19 11:33:15.191839 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 19 11:33:15.191856 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 19 11:33:15.191872 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 19 11:33:15.191889 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 19 11:33:15.191906 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 11:33:15.191927 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 19 11:33:15.191944 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 19 11:33:15.191961 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 19 11:33:15.191978 kernel: Console: colour dummy device 80x25
Mar 19 11:33:15.191995 kernel: printk: console [tty1] enabled
Mar 19 11:33:15.192012 kernel: ACPI: Core revision 20230628
Mar 19 11:33:15.192029 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 19 11:33:15.192046 kernel: pid_max: default: 32768 minimum: 301
Mar 19 11:33:15.194110 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 11:33:15.194152 kernel: landlock: Up and running.
Mar 19 11:33:15.194181 kernel: SELinux:  Initializing.
Mar 19 11:33:15.194200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:33:15.196432 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:33:15.196455 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:33:15.196473 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:33:15.196491 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 11:33:15.196509 kernel: rcu:     Max phase no-delay instances is 400.
Mar 19 11:33:15.196527 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 19 11:33:15.196552 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 19 11:33:15.196570 kernel: Remapping and enabling EFI services.
Mar 19 11:33:15.196587 kernel: smp: Bringing up secondary CPUs ...
Mar 19 11:33:15.196604 kernel: Detected PIPT I-cache on CPU1
Mar 19 11:33:15.196621 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 19 11:33:15.196638 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 19 11:33:15.196655 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 19 11:33:15.196672 kernel: smp: Brought up 1 node, 2 CPUs
Mar 19 11:33:15.196689 kernel: SMP: Total of 2 processors activated.
Mar 19 11:33:15.196706 kernel: CPU features: detected: 32-bit EL0 Support
Mar 19 11:33:15.196728 kernel: CPU features: detected: 32-bit EL1 Support
Mar 19 11:33:15.196746 kernel: CPU features: detected: CRC32 instructions
Mar 19 11:33:15.196774 kernel: CPU: All CPU(s) started at EL1
Mar 19 11:33:15.196797 kernel: alternatives: applying system-wide alternatives
Mar 19 11:33:15.196815 kernel: devtmpfs: initialized
Mar 19 11:33:15.196833 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 11:33:15.196850 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 19 11:33:15.196869 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 11:33:15.196887 kernel: SMBIOS 3.0.0 present.
Mar 19 11:33:15.196909 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 19 11:33:15.196927 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 11:33:15.196945 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 19 11:33:15.196963 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 19 11:33:15.196982 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 19 11:33:15.197000 kernel: audit: initializing netlink subsys (disabled)
Mar 19 11:33:15.197018 kernel: audit: type=2000 audit(0.218:1): state=initialized audit_enabled=0 res=1
Mar 19 11:33:15.197043 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 11:33:15.197079 kernel: cpuidle: using governor menu
Mar 19 11:33:15.197103 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 19 11:33:15.197122 kernel: ASID allocator initialised with 65536 entries
Mar 19 11:33:15.197141 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 11:33:15.197159 kernel: Serial: AMBA PL011 UART driver
Mar 19 11:33:15.197178 kernel: Modules: 17760 pages in range for non-PLT usage
Mar 19 11:33:15.197196 kernel: Modules: 509280 pages in range for PLT usage
Mar 19 11:33:15.197214 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 11:33:15.197239 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 11:33:15.197258 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 19 11:33:15.197276 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 19 11:33:15.197295 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 11:33:15.197313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 11:33:15.197332 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 19 11:33:15.197351 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 19 11:33:15.197369 kernel: ACPI: Added _OSI(Module Device)
Mar 19 11:33:15.197387 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 11:33:15.197411 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 11:33:15.197430 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 11:33:15.197448 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 11:33:15.197466 kernel: ACPI: Interpreter enabled
Mar 19 11:33:15.197484 kernel: ACPI: Using GIC for interrupt routing
Mar 19 11:33:15.197502 kernel: ACPI: MCFG table detected, 1 entries
Mar 19 11:33:15.197520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 19 11:33:15.197823 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 19 11:33:15.198036 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 19 11:33:15.202280 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 19 11:33:15.202519 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 19 11:33:15.202734 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 19 11:33:15.202761 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 19 11:33:15.202780 kernel: acpiphp: Slot [1] registered
Mar 19 11:33:15.202799 kernel: acpiphp: Slot [2] registered
Mar 19 11:33:15.202817 kernel: acpiphp: Slot [3] registered
Mar 19 11:33:15.202846 kernel: acpiphp: Slot [4] registered
Mar 19 11:33:15.202865 kernel: acpiphp: Slot [5] registered
Mar 19 11:33:15.202883 kernel: acpiphp: Slot [6] registered
Mar 19 11:33:15.202901 kernel: acpiphp: Slot [7] registered
Mar 19 11:33:15.202919 kernel: acpiphp: Slot [8] registered
Mar 19 11:33:15.202937 kernel: acpiphp: Slot [9] registered
Mar 19 11:33:15.202955 kernel: acpiphp: Slot [10] registered
Mar 19 11:33:15.202973 kernel: acpiphp: Slot [11] registered
Mar 19 11:33:15.202991 kernel: acpiphp: Slot [12] registered
Mar 19 11:33:15.203009 kernel: acpiphp: Slot [13] registered
Mar 19 11:33:15.203032 kernel: acpiphp: Slot [14] registered
Mar 19 11:33:15.203050 kernel: acpiphp: Slot [15] registered
Mar 19 11:33:15.203109 kernel: acpiphp: Slot [16] registered
Mar 19 11:33:15.203131 kernel: acpiphp: Slot [17] registered
Mar 19 11:33:15.203149 kernel: acpiphp: Slot [18] registered
Mar 19 11:33:15.203183 kernel: acpiphp: Slot [19] registered
Mar 19 11:33:15.203207 kernel: acpiphp: Slot [20] registered
Mar 19 11:33:15.203226 kernel: acpiphp: Slot [21] registered
Mar 19 11:33:15.203244 kernel: acpiphp: Slot [22] registered
Mar 19 11:33:15.203269 kernel: acpiphp: Slot [23] registered
Mar 19 11:33:15.203287 kernel: acpiphp: Slot [24] registered
Mar 19 11:33:15.203305 kernel: acpiphp: Slot [25] registered
Mar 19 11:33:15.203323 kernel: acpiphp: Slot [26] registered
Mar 19 11:33:15.203340 kernel: acpiphp: Slot [27] registered
Mar 19 11:33:15.203358 kernel: acpiphp: Slot [28] registered
Mar 19 11:33:15.203376 kernel: acpiphp: Slot [29] registered
Mar 19 11:33:15.203394 kernel: acpiphp: Slot [30] registered
Mar 19 11:33:15.203411 kernel: acpiphp: Slot [31] registered
Mar 19 11:33:15.203429 kernel: PCI host bridge to bus 0000:00
Mar 19 11:33:15.203678 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 19 11:33:15.203868 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 19 11:33:15.204099 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 19 11:33:15.204308 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 19 11:33:15.204544 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 19 11:33:15.204779 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 19 11:33:15.205017 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 19 11:33:15.205470 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 19 11:33:15.205705 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 19 11:33:15.205923 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 19 11:33:15.206618 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 19 11:33:15.206837 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 19 11:33:15.207046 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 19 11:33:15.207347 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 19 11:33:15.207553 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 19 11:33:15.207756 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 19 11:33:15.208608 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 19 11:33:15.208840 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 19 11:33:15.209053 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 19 11:33:15.209358 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 19 11:33:15.209554 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 19 11:33:15.209735 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 19 11:33:15.209916 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 19 11:33:15.209940 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 19 11:33:15.209959 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 19 11:33:15.209978 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 19 11:33:15.209997 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 19 11:33:15.210016 kernel: iommu: Default domain type: Translated
Mar 19 11:33:15.210040 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 19 11:33:15.210059 kernel: efivars: Registered efivars operations
Mar 19 11:33:15.210126 kernel: vgaarb: loaded
Mar 19 11:33:15.210146 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 19 11:33:15.210165 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 11:33:15.210184 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 11:33:15.210202 kernel: pnp: PnP ACPI init
Mar 19 11:33:15.210426 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 19 11:33:15.210462 kernel: pnp: PnP ACPI: found 1 devices
Mar 19 11:33:15.214140 kernel: NET: Registered PF_INET protocol family
Mar 19 11:33:15.214183 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 19 11:33:15.214202 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 19 11:33:15.214221 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 11:33:15.214239 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 11:33:15.214258 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 19 11:33:15.214276 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 19 11:33:15.214294 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:33:15.214321 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:33:15.214340 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 11:33:15.214358 kernel: PCI: CLS 0 bytes, default 64
Mar 19 11:33:15.214375 kernel: kvm [1]: HYP mode not available
Mar 19 11:33:15.214393 kernel: Initialise system trusted keyrings
Mar 19 11:33:15.214412 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 19 11:33:15.214429 kernel: Key type asymmetric registered
Mar 19 11:33:15.214447 kernel: Asymmetric key parser 'x509' registered
Mar 19 11:33:15.214465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 19 11:33:15.214487 kernel: io scheduler mq-deadline registered
Mar 19 11:33:15.214506 kernel: io scheduler kyber registered
Mar 19 11:33:15.214523 kernel: io scheduler bfq registered
Mar 19 11:33:15.214796 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 19 11:33:15.214826 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 19 11:33:15.214845 kernel: ACPI: button: Power Button [PWRB]
Mar 19 11:33:15.214864 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 19 11:33:15.214882 kernel: ACPI: button: Sleep Button [SLPB]
Mar 19 11:33:15.214907 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 11:33:15.214927 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 19 11:33:15.215187 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 19 11:33:15.215217 kernel: printk: console [ttyS0] disabled
Mar 19 11:33:15.215237 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 19 11:33:15.215255 kernel: printk: console [ttyS0] enabled
Mar 19 11:33:15.215274 kernel: printk: bootconsole [uart0] disabled
Mar 19 11:33:15.215292 kernel: thunder_xcv, ver 1.0
Mar 19 11:33:15.215311 kernel: thunder_bgx, ver 1.0
Mar 19 11:33:15.215329 kernel: nicpf, ver 1.0
Mar 19 11:33:15.215355 kernel: nicvf, ver 1.0
Mar 19 11:33:15.215590 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 19 11:33:15.215792 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:33:14 UTC (1742383994)
Mar 19 11:33:15.215819 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 11:33:15.215837 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 19 11:33:15.215856 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 19 11:33:15.215874 kernel: watchdog: Hard watchdog permanently disabled
Mar 19 11:33:15.215899 kernel: NET: Registered PF_INET6 protocol family
Mar 19 11:33:15.215917 kernel: Segment Routing with IPv6
Mar 19 11:33:15.215936 kernel: In-situ OAM (IOAM) with IPv6
Mar 19 11:33:15.215955 kernel: NET: Registered PF_PACKET protocol family
Mar 19 11:33:15.215973 kernel: Key type dns_resolver registered
Mar 19 11:33:15.215991 kernel: registered taskstats version 1
Mar 19 11:33:15.216008 kernel: Loading compiled-in X.509 certificates
Mar 19 11:33:15.216027 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff'
Mar 19 11:33:15.216045 kernel: Key type .fscrypt registered
Mar 19 11:33:15.224129 kernel: Key type fscrypt-provisioning registered
Mar 19 11:33:15.224191 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 19 11:33:15.224212 kernel: ima: Allocated hash algorithm: sha1
Mar 19 11:33:15.224231 kernel: ima: No architecture policies found
Mar 19 11:33:15.224249 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 19 11:33:15.224267 kernel: clk: Disabling unused clocks
Mar 19 11:33:15.224285 kernel: Freeing unused kernel memory: 38336K
Mar 19 11:33:15.224303 kernel: Run /init as init process
Mar 19 11:33:15.224321 kernel:   with arguments:
Mar 19 11:33:15.224339 kernel:     /init
Mar 19 11:33:15.224362 kernel:   with environment:
Mar 19 11:33:15.224380 kernel:     HOME=/
Mar 19 11:33:15.224398 kernel:     TERM=linux
Mar 19 11:33:15.224416 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 11:33:15.224437 systemd[1]: Successfully made /usr/ read-only.
Mar 19 11:33:15.224462 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:33:15.224483 systemd[1]: Detected virtualization amazon.
Mar 19 11:33:15.224508 systemd[1]: Detected architecture arm64.
Mar 19 11:33:15.224527 systemd[1]: Running in initrd.
Mar 19 11:33:15.224546 systemd[1]: No hostname configured, using default hostname.
Mar 19 11:33:15.224566 systemd[1]: Hostname set to .
Mar 19 11:33:15.224586 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:33:15.224605 systemd[1]: Queued start job for default target initrd.target.
Mar 19 11:33:15.224625 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:33:15.224645 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:33:15.224666 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 11:33:15.224694 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:33:15.224715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 11:33:15.224737 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 11:33:15.224760 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 11:33:15.224782 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 11:33:15.224803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:33:15.224829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:33:15.224852 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:33:15.224872 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:33:15.224892 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:33:15.224913 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:33:15.224933 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:33:15.224953 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:33:15.224973 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 11:33:15.224994 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 11:33:15.225021 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:33:15.225042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:33:15.225205 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:33:15.225236 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:33:15.225256 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 11:33:15.225277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:33:15.225297 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 11:33:15.225317 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 11:33:15.225345 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:33:15.225366 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:33:15.225386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:33:15.225406 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 11:33:15.225426 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:33:15.225448 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 11:33:15.225474 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:33:15.225494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:33:15.225563 systemd-journald[252]: Collecting audit messages is disabled.
Mar 19 11:33:15.225612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:33:15.225633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:33:15.225653 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 11:33:15.225673 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:33:15.225692 kernel: Bridge firewalling registered
Mar 19 11:33:15.225711 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:33:15.225731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:33:15.225751 systemd-journald[252]: Journal started
Mar 19 11:33:15.225792 systemd-journald[252]: Runtime Journal (/run/log/journal/ec214e3cdffbb70788421b4293af4220) is 8M, max 75.3M, 67.3M free.
Mar 19 11:33:15.165515 systemd-modules-load[253]: Inserted module 'overlay'
Mar 19 11:33:15.214701 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 19 11:33:15.246113 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:33:15.249340 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:33:15.267378 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:33:15.273535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:33:15.280149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:33:15.291670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:33:15.304845 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 11:33:15.327434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:33:15.343113 dracut-cmdline[290]: dracut-dracut-053
Mar 19 11:33:15.349177 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:33:15.400599 systemd-resolved[295]: Positive Trust Anchors:
Mar 19 11:33:15.400634 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:33:15.400697 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:33:15.502105 kernel: SCSI subsystem initialized
Mar 19 11:33:15.511093 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 11:33:15.521090 kernel: iscsi: registered transport (tcp)
Mar 19 11:33:15.543235 kernel: iscsi: registered transport (qla4xxx)
Mar 19 11:33:15.543321 kernel: QLogic iSCSI HBA Driver
Mar 19 11:33:15.632101 kernel: random: crng init done
Mar 19 11:33:15.632309 systemd-resolved[295]: Defaulting to hostname 'linux'.
Mar 19 11:33:15.635806 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:33:15.638331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:33:15.662411 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:33:15.673414 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 19 11:33:15.717350 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 19 11:33:15.717430 kernel: device-mapper: uevent: version 1.0.3
Mar 19 11:33:15.717456 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 19 11:33:15.782122 kernel: raid6: neonx8   gen()  6607 MB/s
Mar 19 11:33:15.799095 kernel: raid6: neonx4   gen()  6509 MB/s
Mar 19 11:33:15.816094 kernel: raid6: neonx2   gen()  5395 MB/s
Mar 19 11:33:15.833095 kernel: raid6: neonx1   gen()  3937 MB/s
Mar 19 11:33:15.850094 kernel: raid6: int64x8  gen()  3600 MB/s
Mar 19 11:33:15.867095 kernel: raid6: int64x4  gen()  3688 MB/s
Mar 19 11:33:15.884094 kernel: raid6: int64x2  gen()  3572 MB/s
Mar 19 11:33:15.901838 kernel: raid6: int64x1  gen()  2768 MB/s
Mar 19 11:33:15.901869 kernel: raid6: using algorithm neonx8 gen() 6607 MB/s
Mar 19 11:33:15.919836 kernel: raid6: .... xor() 4814 MB/s, rmw enabled
Mar 19 11:33:15.919875 kernel: raid6: using neon recovery algorithm
Mar 19 11:33:15.927756 kernel: xor: measuring software checksum speed
Mar 19 11:33:15.927815 kernel:    8regs           : 12943 MB/sec
Mar 19 11:33:15.928815 kernel:    32regs          : 13039 MB/sec
Mar 19 11:33:15.930898 kernel:    arm64_neon      :  8983 MB/sec
Mar 19 11:33:15.930931 kernel: xor: using function: 32regs (13039 MB/sec)
Mar 19 11:33:16.013110 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 19 11:33:16.031626 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:33:16.041378 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:33:16.086134 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Mar 19 11:33:16.096237 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:33:16.122333 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:33:16.149736 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Mar 19 11:33:16.205683 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:33:16.216394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:33:16.343875 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:33:16.356340 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:33:16.411597 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:33:16.422346 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:33:16.432335 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:33:16.441580 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:33:16.457405 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:33:16.487194 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:33:16.562949 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 19 11:33:16.563013 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 19 11:33:16.584014 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 19 11:33:16.584323 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 19 11:33:16.584568 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1f:6b:94:81:0b Mar 19 11:33:16.567267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:33:16.567494 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 19 11:33:16.572703 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:33:16.575202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:33:16.575481 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:33:16.578044 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:33:16.592940 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line. Mar 19 11:33:16.608604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:33:16.613475 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:33:16.634100 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 19 11:33:16.636093 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 19 11:33:16.646084 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 19 11:33:16.651315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:33:16.657712 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:33:16.657763 kernel: GPT:9289727 != 16777215 Mar 19 11:33:16.658925 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:33:16.659713 kernel: GPT:9289727 != 16777215 Mar 19 11:33:16.660737 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:33:16.661647 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:16.662617 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:33:16.700733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 19 11:33:16.805544 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (520) Mar 19 11:33:16.815156 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (521) Mar 19 11:33:16.887395 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 19 11:33:16.929749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 19 11:33:16.966196 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 19 11:33:16.971367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 19 11:33:16.994963 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 19 11:33:17.009383 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:33:17.022369 disk-uuid[665]: Primary Header is updated. Mar 19 11:33:17.022369 disk-uuid[665]: Secondary Entries is updated. Mar 19 11:33:17.022369 disk-uuid[665]: Secondary Header is updated. Mar 19 11:33:17.034106 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:17.041090 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:18.049158 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:18.050366 disk-uuid[666]: The operation has completed successfully. Mar 19 11:33:18.246554 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:33:18.247940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:33:18.336394 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Mar 19 11:33:18.345432 sh[924]: Success Mar 19 11:33:18.370102 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:33:18.498576 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:33:18.504127 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:33:18.514280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:33:18.548559 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:33:18.548621 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:18.548648 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:33:18.551386 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:33:18.551421 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:33:18.651107 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 19 11:33:18.666376 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:33:18.670321 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:33:18.683327 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:33:18.691445 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:33:18.725559 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:18.725643 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:18.726849 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 19 11:33:18.735291 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 19 11:33:18.751855 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 19 11:33:18.754396 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:18.770144 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:33:18.784093 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:33:18.869513 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:33:18.885365 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:33:18.942455 systemd-networkd[1117]: lo: Link UP Mar 19 11:33:18.942479 systemd-networkd[1117]: lo: Gained carrier Mar 19 11:33:18.947035 systemd-networkd[1117]: Enumeration completed Mar 19 11:33:18.947944 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:18.948396 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:33:18.949077 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:33:18.960344 systemd-networkd[1117]: eth0: Link UP Mar 19 11:33:18.960359 systemd-networkd[1117]: eth0: Gained carrier Mar 19 11:33:18.960376 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:18.960377 systemd[1]: Reached target network.target - Network. 
Mar 19 11:33:18.990134 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.31.152/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 19 11:33:19.152464 ignition[1038]: Ignition 2.20.0 Mar 19 11:33:19.152494 ignition[1038]: Stage: fetch-offline Mar 19 11:33:19.152939 ignition[1038]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:19.152965 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:19.155618 ignition[1038]: Ignition finished successfully Mar 19 11:33:19.162678 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:33:19.173391 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 19 11:33:19.204504 ignition[1129]: Ignition 2.20.0 Mar 19 11:33:19.204526 ignition[1129]: Stage: fetch Mar 19 11:33:19.205124 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:19.205151 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:19.205334 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:19.216275 ignition[1129]: PUT result: OK Mar 19 11:33:19.219026 ignition[1129]: parsed url from cmdline: "" Mar 19 11:33:19.219042 ignition[1129]: no config URL provided Mar 19 11:33:19.219057 ignition[1129]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:33:19.219104 ignition[1129]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:33:19.219136 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:19.220854 ignition[1129]: PUT result: OK Mar 19 11:33:19.220928 ignition[1129]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 19 11:33:19.223237 ignition[1129]: GET result: OK Mar 19 11:33:19.236112 unknown[1129]: fetched base config from "system" Mar 19 11:33:19.223365 ignition[1129]: parsing config with SHA512: 5b2566ae39faf68543c779abdc9ee6267946a71c367122c749948d2005f4353c0d1d6bcf0d1ec44f6a7c0f9f05bb4d2e8558f0573a06566d1b2cdf2eac059e35
Mar 19 11:33:19.236130 unknown[1129]: fetched base config from "system" Mar 19 11:33:19.238922 ignition[1129]: fetch: fetch complete Mar 19 11:33:19.236153 unknown[1129]: fetched user config from "aws" Mar 19 11:33:19.238936 ignition[1129]: fetch: fetch passed Mar 19 11:33:19.245701 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 19 11:33:19.239059 ignition[1129]: Ignition finished successfully Mar 19 11:33:19.262403 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 19 11:33:19.294472 ignition[1135]: Ignition 2.20.0 Mar 19 11:33:19.294493 ignition[1135]: Stage: kargs Mar 19 11:33:19.295050 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:19.295115 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:19.295288 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:19.298659 ignition[1135]: PUT result: OK Mar 19 11:33:19.308350 ignition[1135]: kargs: kargs passed Mar 19 11:33:19.308622 ignition[1135]: Ignition finished successfully Mar 19 11:33:19.314109 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:33:19.326378 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:33:19.347913 ignition[1141]: Ignition 2.20.0 Mar 19 11:33:19.347941 ignition[1141]: Stage: disks Mar 19 11:33:19.348914 ignition[1141]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:19.349260 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:19.349616 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:19.350623 ignition[1141]: PUT result: OK Mar 19 11:33:19.360846 ignition[1141]: disks: disks passed Mar 19 11:33:19.360948 ignition[1141]: Ignition finished successfully Mar 19 11:33:19.366152 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:33:19.368579 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 19 11:33:19.374408 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:33:19.376644 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:33:19.380327 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:33:19.382213 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:33:19.398410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:33:19.437738 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:33:19.444680 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:33:19.554243 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:33:19.637101 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:33:19.638158 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:33:19.640696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:33:19.659292 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:33:19.674558 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:33:19.678725 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:33:19.678818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:33:19.678868 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:33:19.688046 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:33:19.701445 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 19 11:33:19.717094 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168) Mar 19 11:33:19.721013 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:19.721093 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:19.721122 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 19 11:33:19.735106 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 19 11:33:19.737469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:33:20.076421 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:33:20.096822 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:33:20.105652 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:33:20.114842 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:33:20.484104 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:33:20.493285 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:33:20.503468 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:33:20.522100 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:20.546399 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 19 11:33:20.559828 ignition[1280]: INFO : Ignition 2.20.0 Mar 19 11:33:20.559828 ignition[1280]: INFO : Stage: mount Mar 19 11:33:20.564610 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:20.564610 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:20.564610 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:20.564610 ignition[1280]: INFO : PUT result: OK Mar 19 11:33:20.571122 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:33:20.580155 ignition[1280]: INFO : mount: mount passed Mar 19 11:33:20.580155 ignition[1280]: INFO : Ignition finished successfully Mar 19 11:33:20.583435 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:33:20.602575 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:33:20.622463 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:33:20.657732 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1292) Mar 19 11:33:20.657800 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:20.657826 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:20.660651 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 19 11:33:20.667113 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 19 11:33:20.669110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:33:20.702396 ignition[1309]: INFO : Ignition 2.20.0 Mar 19 11:33:20.705513 ignition[1309]: INFO : Stage: files Mar 19 11:33:20.705513 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:20.705513 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:20.705513 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:20.713037 ignition[1309]: INFO : PUT result: OK Mar 19 11:33:20.717525 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:33:20.720362 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:33:20.720362 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:33:20.761694 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:33:20.764480 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:33:20.766850 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:33:20.765753 unknown[1309]: wrote ssh authorized keys file for user: core Mar 19 11:33:20.782632 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:33:20.782632 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 19 11:33:20.888038 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:33:20.924188 systemd-networkd[1117]: eth0: Gained IPv6LL
Mar 19 11:33:21.026988 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 19 11:33:21.026988 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:33:21.033664 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 19 11:33:21.355809 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 19 11:33:21.473158 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:33:21.476367 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:33:21.476367 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:33:21.476367 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:33:21.490482 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 19 11:33:21.949958 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 19 11:33:23.056484 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 19 11:33:23.056484 ignition[1309]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 19 11:33:23.080748 ignition[1309]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:33:23.084600 ignition[1309]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:33:23.084600 ignition[1309]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 19 11:33:23.090463 ignition[1309]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 19 11:33:23.090463 ignition[1309]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:33:23.095629 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:33:23.095629 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:33:23.101725 ignition[1309]: INFO : files: files passed Mar 19 11:33:23.101725 ignition[1309]: INFO : Ignition finished successfully Mar 19 11:33:23.106186 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:33:23.116435 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:33:23.128398 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:33:23.138699 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:33:23.141339 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:33:23.156170 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:33:23.159568 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:33:23.162689 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:33:23.167439 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:33:23.172727 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:33:23.182467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:33:23.228717 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:33:23.228915 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 19 11:33:23.233943 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:33:23.236331 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:33:23.240310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:33:23.265445 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:33:23.292788 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:33:23.300549 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:33:23.324897 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:33:23.329315 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:33:23.333900 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:33:23.336139 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:33:23.336377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:33:23.343722 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:33:23.347460 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:33:23.349303 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:33:23.351478 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:33:23.353822 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:33:23.363043 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:33:23.365507 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:33:23.369199 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:33:23.374924 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Mar 19 11:33:23.376917 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:33:23.379137 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:33:23.379447 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:33:23.387471 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:33:23.390035 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:33:23.395687 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:33:23.397658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:33:23.400694 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:33:23.400926 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:33:23.408389 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:33:23.408795 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:33:23.415986 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:33:23.416680 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:33:23.429480 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:33:23.434765 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:33:23.442197 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:33:23.445795 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:33:23.448383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:33:23.448637 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:33:23.473671 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:33:23.475314 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 19 11:33:23.487663 ignition[1361]: INFO : Ignition 2.20.0 Mar 19 11:33:23.487663 ignition[1361]: INFO : Stage: umount Mar 19 11:33:23.491250 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:23.493911 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:23.493911 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:23.500002 ignition[1361]: INFO : PUT result: OK Mar 19 11:33:23.503912 ignition[1361]: INFO : umount: umount passed Mar 19 11:33:23.505683 ignition[1361]: INFO : Ignition finished successfully Mar 19 11:33:23.510624 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:33:23.512183 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:33:23.512425 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:33:23.516414 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:33:23.516566 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:33:23.520047 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:33:23.520192 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:33:23.522359 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 19 11:33:23.522449 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 19 11:33:23.529934 systemd[1]: Stopped target network.target - Network. Mar 19 11:33:23.540636 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:33:23.540758 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:33:23.553133 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:33:23.554830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:33:23.558405 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 19 11:33:23.561352 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:33:23.563048 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:33:23.566716 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:33:23.566797 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:33:23.568686 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:33:23.568755 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:33:23.570782 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:33:23.570873 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:33:23.577481 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:33:23.577567 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:33:23.579729 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:33:23.582990 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:33:23.600383 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:33:23.600639 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:33:23.619320 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:33:23.622282 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:33:23.622389 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:33:23.643486 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:33:23.646403 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:33:23.648242 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:33:23.657462 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Mar 19 11:33:23.657993 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:33:23.659324 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:33:23.665715 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:33:23.665803 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:33:23.668140 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:33:23.668237 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:33:23.682743 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:33:23.689479 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:33:23.689733 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:33:23.696256 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:33:23.696365 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:33:23.698644 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 19 11:33:23.698731 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:33:23.703300 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:33:23.719437 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:33:23.731664 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:33:23.734893 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:33:23.739456 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 19 11:33:23.739590 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:33:23.745507 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:33:23.745584 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 19 11:33:23.749459 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:33:23.749677 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:33:23.752441 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:33:23.752534 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:33:23.764702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:33:23.764808 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:33:23.778972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:33:23.781930 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:33:23.782046 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:33:23.789702 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 19 11:33:23.789814 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:33:23.795308 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:33:23.795410 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:33:23.802088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:33:23.802201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:33:23.818175 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:33:23.818353 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:33:23.821017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:33:23.821610 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 19 11:33:23.830851 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Mar 19 11:33:23.843409 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:33:23.862528 systemd[1]: Switching root. Mar 19 11:33:23.924584 systemd-journald[252]: Journal stopped Mar 19 11:33:26.155216 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Mar 19 11:33:26.155347 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:33:26.155390 kernel: SELinux: policy capability open_perms=1 Mar 19 11:33:26.155425 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:33:26.155454 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:33:26.155483 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:33:26.155512 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:33:26.155548 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:33:26.155578 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:33:26.155607 kernel: audit: type=1403 audit(1742384004.302:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:33:26.155646 systemd[1]: Successfully loaded SELinux policy in 87.795ms. Mar 19 11:33:26.155700 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.837ms. Mar 19 11:33:26.155736 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:33:26.155767 systemd[1]: Detected virtualization amazon. Mar 19 11:33:26.155795 systemd[1]: Detected architecture arm64. Mar 19 11:33:26.155826 systemd[1]: Detected first boot. Mar 19 11:33:26.155856 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:33:26.155888 zram_generator::config[1406]: No configuration found. 
Mar 19 11:33:26.155921 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:33:26.155950 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:33:26.155984 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:33:26.156016 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:33:26.156048 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:33:26.158201 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:33:26.158255 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 19 11:33:26.158288 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:33:26.158320 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:33:26.158350 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 19 11:33:26.158388 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:33:26.158419 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:33:26.158450 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 19 11:33:26.158480 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:33:26.158508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:33:26.158540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:33:26.158569 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:33:26.158598 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:33:26.158628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Mar 19 11:33:26.158663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:33:26.158694 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 19 11:33:26.158734 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:33:26.158763 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:33:26.158794 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:33:26.158823 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 19 11:33:26.158851 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:33:26.158885 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:33:26.158916 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:33:26.158949 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:33:26.158979 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:33:26.159009 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:33:26.159041 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 19 11:33:26.159092 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 19 11:33:26.159125 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:33:26.159176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:33:26.159212 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:33:26.159247 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:33:26.159277 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:33:26.159305 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Mar 19 11:33:26.159333 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:33:26.159362 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:33:26.159393 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:33:26.159421 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 19 11:33:26.159452 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:33:26.159486 systemd[1]: Reached target machines.target - Containers. Mar 19 11:33:26.159517 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:33:26.159546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:33:26.159575 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:33:26.159603 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:33:26.159634 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:33:26.159666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:33:26.159694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:33:26.159725 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:33:26.159758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:33:26.159788 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:33:26.159816 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:33:26.159850 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Mar 19 11:33:26.159879 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:33:26.159910 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:33:26.159941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:33:26.159970 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:33:26.160006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:33:26.160038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:33:26.162151 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:33:26.162218 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 19 11:33:26.162248 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:33:26.162292 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:33:26.162321 systemd[1]: Stopped verity-setup.service. Mar 19 11:33:26.162350 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:33:26.162378 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:33:26.162406 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:33:26.162436 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:33:26.162465 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 19 11:33:26.162494 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:33:26.162523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:33:26.162557 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Mar 19 11:33:26.162587 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:33:26.162615 kernel: loop: module loaded Mar 19 11:33:26.162647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:33:26.162676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:33:26.162711 kernel: fuse: init (API version 7.39) Mar 19 11:33:26.162740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:33:26.162774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:33:26.162803 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:33:26.162832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:33:26.162860 kernel: ACPI: bus type drm_connector registered Mar 19 11:33:26.162888 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:33:26.162920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:33:26.162949 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:33:26.162984 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:33:26.163013 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:33:26.163042 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:33:26.163102 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 19 11:33:26.163213 systemd-journald[1489]: Collecting audit messages is disabled. Mar 19 11:33:26.163271 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:33:26.163301 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Mar 19 11:33:26.163336 systemd-journald[1489]: Journal started Mar 19 11:33:26.163384 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec214e3cdffbb70788421b4293af4220) is 8M, max 75.3M, 67.3M free. Mar 19 11:33:25.555559 systemd[1]: Queued start job for default target multi-user.target. Mar 19 11:33:25.569342 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 19 11:33:25.570210 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 19 11:33:26.188236 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:33:26.188332 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 19 11:33:26.193150 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:33:26.203198 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 19 11:33:26.219924 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:33:26.229155 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:33:26.233612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:33:26.251213 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 19 11:33:26.251309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:33:26.265299 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:33:26.265386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:33:26.280030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 19 11:33:26.292689 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:33:26.306096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:33:26.306188 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:33:26.313522 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:33:26.316343 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:33:26.322287 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:33:26.324776 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:33:26.327700 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:33:26.390192 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:33:26.405270 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:33:26.417782 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:33:26.424366 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:33:26.430929 kernel: loop0: detected capacity change from 0 to 53784 Mar 19 11:33:26.428021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:33:26.441418 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:33:26.474972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:33:26.481771 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec214e3cdffbb70788421b4293af4220 is 97.108ms for 930 entries. 
Mar 19 11:33:26.481771 systemd-journald[1489]: System Journal (/var/log/journal/ec214e3cdffbb70788421b4293af4220) is 8M, max 195.6M, 187.6M free. Mar 19 11:33:26.599806 systemd-journald[1489]: Received client request to flush runtime journal. Mar 19 11:33:26.599873 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:33:26.524111 udevadm[1551]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 19 11:33:26.531035 systemd-tmpfiles[1523]: ACLs are not supported, ignoring. Mar 19 11:33:26.531076 systemd-tmpfiles[1523]: ACLs are not supported, ignoring. Mar 19 11:33:26.561732 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:33:26.574443 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 19 11:33:26.585678 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:33:26.587195 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 19 11:33:26.605243 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:33:26.611212 kernel: loop1: detected capacity change from 0 to 113512 Mar 19 11:33:26.695297 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 19 11:33:26.705392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:33:26.757774 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Mar 19 11:33:26.757814 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Mar 19 11:33:26.769393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 19 11:33:26.791321 kernel: loop2: detected capacity change from 0 to 123192 Mar 19 11:33:26.914100 kernel: loop3: detected capacity change from 0 to 189592 Mar 19 11:33:27.137263 kernel: loop4: detected capacity change from 0 to 53784 Mar 19 11:33:27.156676 kernel: loop5: detected capacity change from 0 to 113512 Mar 19 11:33:27.156760 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:33:27.161243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:33:27.172145 kernel: loop6: detected capacity change from 0 to 123192 Mar 19 11:33:27.185374 kernel: loop7: detected capacity change from 0 to 189592 Mar 19 11:33:27.209775 (sd-merge)[1569]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 19 11:33:27.210817 (sd-merge)[1569]: Merged extensions into '/usr'. Mar 19 11:33:27.219682 systemd[1]: Reload requested from client PID 1522 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:33:27.220131 systemd[1]: Reloading... Mar 19 11:33:27.405302 zram_generator::config[1597]: No configuration found. Mar 19 11:33:27.680768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:33:27.825420 systemd[1]: Reloading finished in 604 ms. Mar 19 11:33:27.849245 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:33:27.852413 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:33:27.869213 systemd[1]: Starting ensure-sysext.service... Mar 19 11:33:27.875525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:33:27.882475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 19 11:33:27.910015 systemd[1]: Reload requested from client PID 1649 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:33:27.910044 systemd[1]: Reloading... Mar 19 11:33:27.921587 systemd-tmpfiles[1650]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:33:27.922137 systemd-tmpfiles[1650]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 19 11:33:27.924334 systemd-tmpfiles[1650]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:33:27.924899 systemd-tmpfiles[1650]: ACLs are not supported, ignoring. Mar 19 11:33:27.925038 systemd-tmpfiles[1650]: ACLs are not supported, ignoring. Mar 19 11:33:27.933736 systemd-tmpfiles[1650]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:33:27.933768 systemd-tmpfiles[1650]: Skipping /boot Mar 19 11:33:27.960374 systemd-tmpfiles[1650]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:33:27.960400 systemd-tmpfiles[1650]: Skipping /boot Mar 19 11:33:28.031695 systemd-udevd[1651]: Using default interface naming scheme 'v255'. Mar 19 11:33:28.076416 zram_generator::config[1685]: No configuration found. Mar 19 11:33:28.240102 (udev-worker)[1697]: loop7: Failed to create/update device symlink '/dev/disk/by-loop-inode/259:7-87', ignoring: No such file or directory Mar 19 11:33:28.316991 (udev-worker)[1701]: Network interface NamePolicy= disabled on kernel command line. Mar 19 11:33:28.457571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 19 11:33:28.599130 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1697) Mar 19 11:33:28.634896 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 19 11:33:28.635604 systemd[1]: Reloading finished in 724 ms. Mar 19 11:33:28.650343 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:33:28.682553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:33:28.746270 systemd[1]: Finished ensure-sysext.service. Mar 19 11:33:28.770496 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:33:28.776606 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 19 11:33:28.779536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:33:28.784406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:33:28.791435 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:33:28.795356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:33:28.802371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:33:28.804553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:33:28.804647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:33:28.810378 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:33:28.820305 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 19 11:33:28.828369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:33:28.830363 systemd[1]: Reached target time-set.target - System Time Set. Mar 19 11:33:28.835602 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 19 11:33:28.842393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:33:28.910986 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:33:28.927859 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:33:28.929152 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:33:28.938324 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:33:28.943210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:33:28.945954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:33:28.946447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:33:28.950546 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:33:28.973979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:33:28.975886 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:33:28.981322 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:33:28.993997 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:33:29.007350 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 19 11:33:29.011257 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:33:29.044215 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:33:29.061495 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:33:29.077788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 19 11:33:29.091540 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:33:29.094710 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:33:29.114016 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:33:29.148624 augenrules[1891]: No rules Mar 19 11:33:29.150838 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:33:29.154397 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:33:29.160663 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:33:29.172091 lvm[1887]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:33:29.182739 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:33:29.193376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:33:29.212566 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:33:29.218492 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:33:29.222240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:33:29.235675 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Mar 19 11:33:29.257255 lvm[1907]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:33:29.297156 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:33:29.360574 systemd-networkd[1848]: lo: Link UP Mar 19 11:33:29.361020 systemd-networkd[1848]: lo: Gained carrier Mar 19 11:33:29.364251 systemd-networkd[1848]: Enumeration completed Mar 19 11:33:29.364432 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:33:29.368928 systemd-networkd[1848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:29.368938 systemd-networkd[1848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:33:29.371816 systemd-networkd[1848]: eth0: Link UP Mar 19 11:33:29.372402 systemd-networkd[1848]: eth0: Gained carrier Mar 19 11:33:29.372523 systemd-networkd[1848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:29.374379 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:33:29.383801 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:33:29.389194 systemd-networkd[1848]: eth0: DHCPv4 address 172.31.31.152/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 19 11:33:29.391574 systemd-resolved[1849]: Positive Trust Anchors: Mar 19 11:33:29.391631 systemd-resolved[1849]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:33:29.391698 systemd-resolved[1849]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:33:29.404395 systemd-resolved[1849]: Defaulting to hostname 'linux'. Mar 19 11:33:29.409939 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:33:29.412283 systemd[1]: Reached target network.target - Network. Mar 19 11:33:29.414133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:33:29.416806 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:33:29.418903 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:33:29.421251 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:33:29.424089 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:33:29.426251 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:33:29.428489 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:33:29.430728 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:33:29.430882 systemd[1]: Reached target paths.target - Path Units. 
Mar 19 11:33:29.432555 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:33:29.436678 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:33:29.441344 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:33:29.448379 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:33:29.452817 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:33:29.455313 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:33:29.471332 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:33:29.474281 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:33:29.479127 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:33:29.482152 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:33:29.485571 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:33:29.487963 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:33:29.489784 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:33:29.489941 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:33:29.497334 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:33:29.505386 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 19 11:33:29.510521 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:33:29.518458 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:33:29.531495 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 19 11:33:29.534045 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:33:29.541396 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:33:29.548397 systemd[1]: Started ntpd.service - Network Time Service. Mar 19 11:33:29.556310 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 19 11:33:29.564339 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 19 11:33:29.565146 jq[1921]: false Mar 19 11:33:29.569686 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:33:29.580492 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:33:29.592421 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:33:29.595845 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:33:29.598840 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:33:29.602671 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:33:29.617355 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:33:29.623918 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:33:29.626218 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:33:29.661544 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:33:29.662031 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 19 11:33:29.671456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 19 11:33:29.716520 jq[1933]: true Mar 19 11:33:29.739652 dbus-daemon[1920]: [system] SELinux support is enabled Mar 19 11:33:29.739942 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:33:29.747967 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:33:29.748100 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:33:29.751739 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:33:29.751782 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:33:29.754712 dbus-daemon[1920]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1848 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 19 11:33:29.756295 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 19 11:33:29.775853 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 19 11:33:29.792392 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:33:29.793172 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 19 11:33:29.814075 extend-filesystems[1922]: Found loop4 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found loop5 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found loop6 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found loop7 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1p1 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1p2 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1p3 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found usr Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1p4 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1p6 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1p7 Mar 19 11:33:29.828690 extend-filesystems[1922]: Found nvme0n1p9 Mar 19 11:33:29.828690 extend-filesystems[1922]: Checking size of /dev/nvme0n1p9 Mar 19 11:33:29.877737 update_engine[1932]: I20250319 11:33:29.871816 1932 main.cc:92] Flatcar Update Engine starting Mar 19 11:33:29.828731 (ntainerd)[1956]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:33:29.881317 tar[1946]: linux-arm64/helm Mar 19 11:33:29.881666 jq[1952]: true Mar 19 11:33:29.900366 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:33:29.907454 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:33:29.910755 update_engine[1932]: I20250319 11:33:29.907795 1932 update_check_scheduler.cc:74] Next update check in 6m25s Mar 19 11:33:29.913043 systemd[1]: Finished setup-oem.service - Setup OEM. 
Mar 19 11:33:29.928629 extend-filesystems[1922]: Resized partition /dev/nvme0n1p9 Mar 19 11:33:29.944480 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Wed Mar 19 09:45:36 UTC 2025 (1): Starting Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Wed Mar 19 09:45:36 UTC 2025 (1): Starting Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: ---------------------------------------------------- Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: corporation. Support and training for ntp-4 are Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: available at https://www.nwtime.org/support Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: ---------------------------------------------------- Mar 19 11:33:29.954579 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: proto: precision = 0.096 usec (-23) Mar 19 11:33:29.944533 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 19 11:33:29.944552 ntpd[1924]: ---------------------------------------------------- Mar 19 11:33:29.944570 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, Mar 19 11:33:29.944588 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 19 11:33:29.944606 ntpd[1924]: corporation. 
Support and training for ntp-4 are Mar 19 11:33:29.944623 ntpd[1924]: available at https://www.nwtime.org/support Mar 19 11:33:29.944641 ntpd[1924]: ---------------------------------------------------- Mar 19 11:33:29.949970 ntpd[1924]: proto: precision = 0.096 usec (-23) Mar 19 11:33:29.968632 extend-filesystems[1975]: resize2fs 1.47.1 (20-May-2024) Mar 19 11:33:29.974354 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: basedate set to 2025-03-07 Mar 19 11:33:29.974354 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: gps base set to 2025-03-09 (week 2357) Mar 19 11:33:29.974354 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 Mar 19 11:33:29.974354 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 19 11:33:29.958031 ntpd[1924]: basedate set to 2025-03-07 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch failed with 404: resource not found Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 19 11:33:29.974614 coreos-metadata[1919]: Mar 19 11:33:29.971 INFO Fetch successful Mar 19 11:33:29.958082 ntpd[1924]: gps base set to 2025-03-09 (week 2357) Mar 19 11:33:29.973931 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 Mar 19 11:33:29.974018 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 19 11:33:29.994793 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 19 11:33:29.994944 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 Mar 19 11:33:29.994944 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: Listen normally on 3 eth0 172.31.31.152:123 Mar 19 11:33:29.994944 ntpd[1924]: 19 Mar 11:33:29 
ntpd[1924]: Listen normally on 4 lo [::1]:123 Mar 19 11:33:29.994944 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: bind(21) AF_INET6 fe80::41f:6bff:fe94:810b%2#123 flags 0x11 failed: Cannot assign requested address Mar 19 11:33:29.994944 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: unable to create socket on eth0 (5) for fe80::41f:6bff:fe94:810b%2#123 Mar 19 11:33:29.994944 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: failed to init interface for address fe80::41f:6bff:fe94:810b%2 Mar 19 11:33:29.994944 ntpd[1924]: 19 Mar 11:33:29 ntpd[1924]: Listening on routing socket on fd #21 for interface updates Mar 19 11:33:29.979653 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 Mar 19 11:33:29.979724 ntpd[1924]: Listen normally on 3 eth0 172.31.31.152:123 Mar 19 11:33:29.979788 ntpd[1924]: Listen normally on 4 lo [::1]:123 Mar 19 11:33:29.979864 ntpd[1924]: bind(21) AF_INET6 fe80::41f:6bff:fe94:810b%2#123 flags 0x11 failed: Cannot assign requested address Mar 19 11:33:29.979901 ntpd[1924]: unable to create socket on eth0 (5) for fe80::41f:6bff:fe94:810b%2#123 Mar 19 11:33:29.979927 ntpd[1924]: failed to init interface for address fe80::41f:6bff:fe94:810b%2 Mar 19 11:33:29.980003 ntpd[1924]: Listening on routing socket on fd #21 for interface updates Mar 19 11:33:30.005248 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:30.005307 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:30.005429 ntpd[1924]: 19 Mar 11:33:30 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:30.005429 ntpd[1924]: 19 Mar 11:33:30 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:30.110160 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 19 11:33:30.112576 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 19 11:33:30.122037 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 19 11:33:30.144651 extend-filesystems[1975]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 19 11:33:30.144651 extend-filesystems[1975]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 19 11:33:30.144651 extend-filesystems[1975]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 19 11:33:30.155177 extend-filesystems[1922]: Resized filesystem in /dev/nvme0n1p9 Mar 19 11:33:30.147774 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:33:30.148224 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:33:30.194591 bash[2001]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:33:30.194055 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:33:30.225117 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1697) Mar 19 11:33:30.330933 systemd[1]: Starting sshkeys.service... Mar 19 11:33:30.342679 systemd-logind[1931]: Watching system buttons on /dev/input/event0 (Power Button) Mar 19 11:33:30.342735 systemd-logind[1931]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 19 11:33:30.356490 systemd-logind[1931]: New seat seat0. Mar 19 11:33:30.365831 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:33:30.402839 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 19 11:33:30.415249 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 19 11:33:30.459786 containerd[1956]: time="2025-03-19T11:33:30.457375439Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:33:30.488779 locksmithd[1972]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:33:30.531787 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 19 11:33:30.545324 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 19 11:33:30.577748 dbus-daemon[1920]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1961 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 19 11:33:30.609681 systemd[1]: Starting polkit.service - Authorization Manager... Mar 19 11:33:30.670410 polkitd[2087]: Started polkitd version 121 Mar 19 11:33:30.696303 polkitd[2087]: Loading rules from directory /etc/polkit-1/rules.d Mar 19 11:33:30.696442 polkitd[2087]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 19 11:33:30.697509 containerd[1956]: time="2025-03-19T11:33:30.697402788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:30.697898 polkitd[2087]: Finished loading, compiling and executing 2 rules Mar 19 11:33:30.701279 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 19 11:33:30.701544 systemd[1]: Started polkit.service - Authorization Manager. Mar 19 11:33:30.703652 polkitd[2087]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 19 11:33:30.716121 containerd[1956]: time="2025-03-19T11:33:30.715962108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:30.717120 containerd[1956]: time="2025-03-19T11:33:30.716252748Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:33:30.717120 containerd[1956]: time="2025-03-19T11:33:30.716302512Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:33:30.717120 containerd[1956]: time="2025-03-19T11:33:30.716604696Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:33:30.717120 containerd[1956]: time="2025-03-19T11:33:30.716647764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:30.717120 containerd[1956]: time="2025-03-19T11:33:30.716776512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:30.717120 containerd[1956]: time="2025-03-19T11:33:30.716804088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:30.723123 containerd[1956]: time="2025-03-19T11:33:30.718330536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:30.723123 containerd[1956]: time="2025-03-19T11:33:30.718384344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 19 11:33:30.723123 containerd[1956]: time="2025-03-19T11:33:30.718418160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:30.723123 containerd[1956]: time="2025-03-19T11:33:30.718441788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:30.723123 containerd[1956]: time="2025-03-19T11:33:30.718646424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:30.726653 containerd[1956]: time="2025-03-19T11:33:30.726599832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:30.727224 containerd[1956]: time="2025-03-19T11:33:30.727178436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:30.729014 containerd[1956]: time="2025-03-19T11:33:30.728585820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:33:30.729014 containerd[1956]: time="2025-03-19T11:33:30.728844180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 19 11:33:30.729014 containerd[1956]: time="2025-03-19T11:33:30.728951088Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:33:30.747672 coreos-metadata[2047]: Mar 19 11:33:30.747 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 19 11:33:30.747672 coreos-metadata[2047]: Mar 19 11:33:30.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 19 11:33:30.747672 coreos-metadata[2047]: Mar 19 11:33:30.747 INFO Fetch successful Mar 19 11:33:30.747672 coreos-metadata[2047]: Mar 19 11:33:30.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 19 11:33:30.747672 coreos-metadata[2047]: Mar 19 11:33:30.747 INFO Fetch successful Mar 19 11:33:30.750308 containerd[1956]: time="2025-03-19T11:33:30.748531812Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:33:30.750308 containerd[1956]: time="2025-03-19T11:33:30.748705776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:33:30.750308 containerd[1956]: time="2025-03-19T11:33:30.748809624Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:33:30.750308 containerd[1956]: time="2025-03-19T11:33:30.748852512Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:33:30.750308 containerd[1956]: time="2025-03-19T11:33:30.748888944Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:33:30.750308 containerd[1956]: time="2025-03-19T11:33:30.749168664Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Mar 19 11:33:30.750202 unknown[2047]: wrote ssh authorized keys file for user: core Mar 19 11:33:30.760511 containerd[1956]: time="2025-03-19T11:33:30.759602496Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:33:30.760511 containerd[1956]: time="2025-03-19T11:33:30.759874176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 19 11:33:30.760511 containerd[1956]: time="2025-03-19T11:33:30.759907668Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:33:30.760511 containerd[1956]: time="2025-03-19T11:33:30.759939984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:33:30.760511 containerd[1956]: time="2025-03-19T11:33:30.759972012Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:33:30.760511 containerd[1956]: time="2025-03-19T11:33:30.760011948Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:33:30.760511 containerd[1956]: time="2025-03-19T11:33:30.760043376Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:33:30.782539 systemd-hostnamed[1961]: Hostname set to (transient) Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772319472Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772399776Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772436856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772467432Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772494948Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772541472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772573680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772602672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772633656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772662384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772694460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772722312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772753812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 19 11:33:30.784560 containerd[1956]: time="2025-03-19T11:33:30.772784856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.783171 systemd-resolved[1849]: System hostname changed to 'ip-172-31-31-152'. Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.772818228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.772846656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.772874820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.772908144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.772939788Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.772985340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.773020440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.773049648Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.781313208Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.781374852Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.781402764Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.781444044Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:33:30.804787 containerd[1956]: time="2025-03-19T11:33:30.781470900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:33:30.806487 containerd[1956]: time="2025-03-19T11:33:30.781500684Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:33:30.806487 containerd[1956]: time="2025-03-19T11:33:30.781525764Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:33:30.806487 containerd[1956]: time="2025-03-19T11:33:30.781551312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.782055180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.787052148Z" level=info msg="Connect containerd service" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.789877356Z" level=info msg="using legacy CRI server" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.789902004Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.790181808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.797759304Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.797970156Z" level=info msg="Start subscribing containerd event" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.798045876Z" level=info msg="Start recovering state" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.798183372Z" level=info msg="Start event monitor" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.798207144Z" level=info msg="Start 
snapshots syncer" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.798229332Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.798249768Z" level=info msg="Start streaming server" Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.804741888Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.804863724Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:33:30.806789 containerd[1956]: time="2025-03-19T11:33:30.804960468Z" level=info msg="containerd successfully booted in 0.349992s" Mar 19 11:33:30.826822 update-ssh-keys[2119]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:33:30.816517 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:33:30.822135 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 19 11:33:30.832653 systemd[1]: Finished sshkeys.service. 
Mar 19 11:33:30.945227 ntpd[1924]: bind(24) AF_INET6 fe80::41f:6bff:fe94:810b%2#123 flags 0x11 failed: Cannot assign requested address Mar 19 11:33:30.945297 ntpd[1924]: unable to create socket on eth0 (6) for fe80::41f:6bff:fe94:810b%2#123 Mar 19 11:33:30.945708 ntpd[1924]: 19 Mar 11:33:30 ntpd[1924]: bind(24) AF_INET6 fe80::41f:6bff:fe94:810b%2#123 flags 0x11 failed: Cannot assign requested address Mar 19 11:33:30.945708 ntpd[1924]: 19 Mar 11:33:30 ntpd[1924]: unable to create socket on eth0 (6) for fe80::41f:6bff:fe94:810b%2#123 Mar 19 11:33:30.945708 ntpd[1924]: 19 Mar 11:33:30 ntpd[1924]: failed to init interface for address fe80::41f:6bff:fe94:810b%2 Mar 19 11:33:30.945325 ntpd[1924]: failed to init interface for address fe80::41f:6bff:fe94:810b%2 Mar 19 11:33:30.972228 systemd-networkd[1848]: eth0: Gained IPv6LL Mar 19 11:33:30.982740 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:33:30.986021 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:33:30.997575 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 19 11:33:31.012655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:33:31.018716 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:33:31.144550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:33:31.187209 amazon-ssm-agent[2125]: Initializing new seelog logger Mar 19 11:33:31.192171 amazon-ssm-agent[2125]: New Seelog Logger Creation Complete Mar 19 11:33:31.192171 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:31.192171 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 19 11:33:31.192171 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 processing appconfig overrides Mar 19 11:33:31.192171 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:31.192171 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:31.192171 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 processing appconfig overrides Mar 19 11:33:31.192793 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:31.196274 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:31.196274 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 processing appconfig overrides Mar 19 11:33:31.196274 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO Proxy environment variables: Mar 19 11:33:31.198376 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:31.202115 amazon-ssm-agent[2125]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:31.202115 amazon-ssm-agent[2125]: 2025/03/19 11:33:31 processing appconfig overrides Mar 19 11:33:31.295456 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO https_proxy: Mar 19 11:33:31.395336 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO http_proxy: Mar 19 11:33:31.494019 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO no_proxy: Mar 19 11:33:31.592841 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO Checking if agent identity type OnPrem can be assumed Mar 19 11:33:31.691998 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO Checking if agent identity type EC2 can be assumed Mar 19 11:33:31.739129 tar[1946]: linux-arm64/LICENSE Mar 19 11:33:31.739129 tar[1946]: linux-arm64/README.md Mar 19 11:33:31.784300 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 19 11:33:31.792544 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO Agent will take identity from EC2 Mar 19 11:33:31.893199 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 19 11:33:31.992142 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [amazon-ssm-agent] Starting Core Agent Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [Registrar] Starting registrar module Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [EC2Identity] EC2 registration was successful. 
Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [CredentialRefresher] credentialRefresher has started Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:31 INFO [CredentialRefresher] Starting credentials refresher loop Mar 19 11:33:32.006440 amazon-ssm-agent[2125]: 2025-03-19 11:33:32 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 19 11:33:32.063875 sshd_keygen[1960]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:33:32.090746 amazon-ssm-agent[2125]: 2025-03-19 11:33:32 INFO [CredentialRefresher] Next credential rotation will be in 30.141657156466668 minutes Mar 19 11:33:32.112149 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:33:32.120521 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:33:32.132258 systemd[1]: Started sshd@0-172.31.31.152:22-139.178.68.195:48362.service - OpenSSH per-connection server daemon (139.178.68.195:48362). Mar 19 11:33:32.154496 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:33:32.157197 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:33:32.170394 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:33:32.209807 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:33:32.219697 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:33:32.228744 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 19 11:33:32.231222 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:33:32.357236 sshd[2156]: Accepted publickey for core from 139.178.68.195 port 48362 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:32.359982 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:32.373436 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 19 11:33:32.383849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:33:32.404295 systemd-logind[1931]: New session 1 of user core. Mar 19 11:33:32.422149 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:33:32.437567 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:33:32.454012 (systemd)[2167]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:33:32.459663 systemd-logind[1931]: New session c1 of user core. Mar 19 11:33:32.752051 systemd[2167]: Queued start job for default target default.target. Mar 19 11:33:32.759160 systemd[2167]: Created slice app.slice - User Application Slice. Mar 19 11:33:32.759223 systemd[2167]: Reached target paths.target - Paths. Mar 19 11:33:32.759304 systemd[2167]: Reached target timers.target - Timers. Mar 19 11:33:32.761947 systemd[2167]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:33:32.780914 systemd[2167]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:33:32.781415 systemd[2167]: Reached target sockets.target - Sockets. Mar 19 11:33:32.781624 systemd[2167]: Reached target basic.target - Basic System. Mar 19 11:33:32.781859 systemd[2167]: Reached target default.target - Main User Target. Mar 19 11:33:32.782026 systemd[2167]: Startup finished in 310ms. Mar 19 11:33:32.782481 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:33:32.794358 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:33:32.958587 systemd[1]: Started sshd@1-172.31.31.152:22-139.178.68.195:48374.service - OpenSSH per-connection server daemon (139.178.68.195:48374). 
Mar 19 11:33:33.035161 amazon-ssm-agent[2125]: 2025-03-19 11:33:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 19 11:33:33.135264 amazon-ssm-agent[2125]: 2025-03-19 11:33:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2181) started Mar 19 11:33:33.156922 sshd[2178]: Accepted publickey for core from 139.178.68.195 port 48374 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:33.158623 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:33.173406 systemd-logind[1931]: New session 2 of user core. Mar 19 11:33:33.178361 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 19 11:33:33.235724 amazon-ssm-agent[2125]: 2025-03-19 11:33:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 19 11:33:33.309465 sshd[2187]: Connection closed by 139.178.68.195 port 48374 Mar 19 11:33:33.310273 sshd-session[2178]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:33.317383 systemd[1]: sshd@1-172.31.31.152:22-139.178.68.195:48374.service: Deactivated successfully. Mar 19 11:33:33.321372 systemd[1]: session-2.scope: Deactivated successfully. Mar 19 11:33:33.325403 systemd-logind[1931]: Session 2 logged out. Waiting for processes to exit. Mar 19 11:33:33.327192 systemd-logind[1931]: Removed session 2. Mar 19 11:33:33.349567 systemd[1]: Started sshd@2-172.31.31.152:22-139.178.68.195:48388.service - OpenSSH per-connection server daemon (139.178.68.195:48388). 
Mar 19 11:33:33.531807 sshd[2197]: Accepted publickey for core from 139.178.68.195 port 48388 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:33.534789 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:33.546546 systemd-logind[1931]: New session 3 of user core. Mar 19 11:33:33.552591 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:33:33.600162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:33:33.603383 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:33:33.606347 systemd[1]: Startup finished in 1.071s (kernel) + 9.483s (initrd) + 9.390s (userspace) = 19.945s. Mar 19 11:33:33.614238 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:33:33.679932 sshd[2199]: Connection closed by 139.178.68.195 port 48388 Mar 19 11:33:33.682007 sshd-session[2197]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:33.687686 systemd[1]: sshd@2-172.31.31.152:22-139.178.68.195:48388.service: Deactivated successfully. Mar 19 11:33:33.691876 systemd[1]: session-3.scope: Deactivated successfully. Mar 19 11:33:33.693539 systemd-logind[1931]: Session 3 logged out. Waiting for processes to exit. Mar 19 11:33:33.695696 systemd-logind[1931]: Removed session 3. 
Mar 19 11:33:33.945510 ntpd[1924]: Listen normally on 7 eth0 [fe80::41f:6bff:fe94:810b%2]:123 Mar 19 11:33:33.946368 ntpd[1924]: 19 Mar 11:33:33 ntpd[1924]: Listen normally on 7 eth0 [fe80::41f:6bff:fe94:810b%2]:123 Mar 19 11:33:34.799525 kubelet[2205]: E0319 11:33:34.799447 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:33:34.803958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:33:34.804359 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:33:34.805157 systemd[1]: kubelet.service: Consumed 1.233s CPU time, 231.8M memory peak. Mar 19 11:33:36.706547 systemd-resolved[1849]: Clock change detected. Flushing caches. Mar 19 11:33:43.486834 systemd[1]: Started sshd@3-172.31.31.152:22-139.178.68.195:40768.service - OpenSSH per-connection server daemon (139.178.68.195:40768). Mar 19 11:33:43.671280 sshd[2221]: Accepted publickey for core from 139.178.68.195 port 40768 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:43.673733 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:43.681397 systemd-logind[1931]: New session 4 of user core. Mar 19 11:33:43.692589 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:33:43.818385 sshd[2223]: Connection closed by 139.178.68.195 port 40768 Mar 19 11:33:43.819448 sshd-session[2221]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:43.825220 systemd[1]: sshd@3-172.31.31.152:22-139.178.68.195:40768.service: Deactivated successfully. Mar 19 11:33:43.829296 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 19 11:33:43.830727 systemd-logind[1931]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:33:43.832584 systemd-logind[1931]: Removed session 4. Mar 19 11:33:43.859840 systemd[1]: Started sshd@4-172.31.31.152:22-139.178.68.195:40774.service - OpenSSH per-connection server daemon (139.178.68.195:40774). Mar 19 11:33:44.036375 sshd[2229]: Accepted publickey for core from 139.178.68.195 port 40774 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:44.039225 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:44.047831 systemd-logind[1931]: New session 5 of user core. Mar 19 11:33:44.058613 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:33:44.174481 sshd[2231]: Connection closed by 139.178.68.195 port 40774 Mar 19 11:33:44.175328 sshd-session[2229]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:44.182006 systemd[1]: sshd@4-172.31.31.152:22-139.178.68.195:40774.service: Deactivated successfully. Mar 19 11:33:44.186313 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:33:44.187922 systemd-logind[1931]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:33:44.189791 systemd-logind[1931]: Removed session 5. Mar 19 11:33:44.213858 systemd[1]: Started sshd@5-172.31.31.152:22-139.178.68.195:40782.service - OpenSSH per-connection server daemon (139.178.68.195:40782). Mar 19 11:33:44.401510 sshd[2237]: Accepted publickey for core from 139.178.68.195 port 40782 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:44.403877 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:44.412697 systemd-logind[1931]: New session 6 of user core. Mar 19 11:33:44.422662 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 19 11:33:44.546215 sshd[2239]: Connection closed by 139.178.68.195 port 40782 Mar 19 11:33:44.547028 sshd-session[2237]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:44.552467 systemd-logind[1931]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:33:44.553903 systemd[1]: sshd@5-172.31.31.152:22-139.178.68.195:40782.service: Deactivated successfully. Mar 19 11:33:44.558679 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:33:44.563425 systemd-logind[1931]: Removed session 6. Mar 19 11:33:44.567034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:33:44.573739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:33:44.593008 systemd[1]: Started sshd@6-172.31.31.152:22-139.178.68.195:40792.service - OpenSSH per-connection server daemon (139.178.68.195:40792). Mar 19 11:33:44.788404 sshd[2248]: Accepted publickey for core from 139.178.68.195 port 40792 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:44.789133 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:44.800728 systemd-logind[1931]: New session 7 of user core. Mar 19 11:33:44.810647 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:33:44.881592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:33:44.881938 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:33:44.938189 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:33:44.940169 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:44.957689 sudo[2261]: pam_unix(sudo:session): session closed for user root Mar 19 11:33:44.970176 kubelet[2256]: E0319 11:33:44.970059 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:33:44.977361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:33:44.977719 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:33:44.978763 systemd[1]: kubelet.service: Consumed 283ms CPU time, 97.1M memory peak. Mar 19 11:33:44.984948 sshd[2250]: Connection closed by 139.178.68.195 port 40792 Mar 19 11:33:44.984774 sshd-session[2248]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:44.990631 systemd[1]: sshd@6-172.31.31.152:22-139.178.68.195:40792.service: Deactivated successfully. Mar 19 11:33:44.993663 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:33:44.997953 systemd-logind[1931]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:33:44.999742 systemd-logind[1931]: Removed session 7. Mar 19 11:33:45.022842 systemd[1]: Started sshd@7-172.31.31.152:22-139.178.68.195:40804.service - OpenSSH per-connection server daemon (139.178.68.195:40804). 
Mar 19 11:33:45.212039 sshd[2269]: Accepted publickey for core from 139.178.68.195 port 40804 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:45.214839 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:45.223521 systemd-logind[1931]: New session 8 of user core. Mar 19 11:33:45.226588 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:33:45.331111 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:33:45.332258 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:45.338469 sudo[2273]: pam_unix(sudo:session): session closed for user root Mar 19 11:33:45.348196 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:33:45.349507 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:45.373918 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:33:45.420645 augenrules[2295]: No rules Mar 19 11:33:45.423242 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:33:45.424467 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:33:45.426272 sudo[2272]: pam_unix(sudo:session): session closed for user root Mar 19 11:33:45.449564 sshd[2271]: Connection closed by 139.178.68.195 port 40804 Mar 19 11:33:45.450321 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:45.457024 systemd[1]: sshd@7-172.31.31.152:22-139.178.68.195:40804.service: Deactivated successfully. Mar 19 11:33:45.460089 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:33:45.461475 systemd-logind[1931]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:33:45.463386 systemd-logind[1931]: Removed session 8. 
Mar 19 11:33:45.494803 systemd[1]: Started sshd@8-172.31.31.152:22-139.178.68.195:40806.service - OpenSSH per-connection server daemon (139.178.68.195:40806). Mar 19 11:33:45.675189 sshd[2304]: Accepted publickey for core from 139.178.68.195 port 40806 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:45.677562 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:45.685255 systemd-logind[1931]: New session 9 of user core. Mar 19 11:33:45.697596 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:33:45.801071 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:33:45.802221 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:46.338777 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:33:46.338928 (dockerd)[2324]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:33:46.670481 dockerd[2324]: time="2025-03-19T11:33:46.669903845Z" level=info msg="Starting up" Mar 19 11:33:46.813858 dockerd[2324]: time="2025-03-19T11:33:46.813417222Z" level=info msg="Loading containers: start." Mar 19 11:33:47.052366 kernel: Initializing XFRM netlink socket Mar 19 11:33:47.084000 (udev-worker)[2347]: Network interface NamePolicy= disabled on kernel command line. Mar 19 11:33:47.182147 systemd-networkd[1848]: docker0: Link UP Mar 19 11:33:47.222618 dockerd[2324]: time="2025-03-19T11:33:47.222471352Z" level=info msg="Loading containers: done." 
Mar 19 11:33:47.245016 dockerd[2324]: time="2025-03-19T11:33:47.244931704Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:33:47.245237 dockerd[2324]: time="2025-03-19T11:33:47.245079652Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:33:47.245385 dockerd[2324]: time="2025-03-19T11:33:47.245290096Z" level=info msg="Daemon has completed initialization" Mar 19 11:33:47.296272 dockerd[2324]: time="2025-03-19T11:33:47.296121100Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:33:47.296764 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 19 11:33:48.372984 containerd[1956]: time="2025-03-19T11:33:48.372911525Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 19 11:33:49.001831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246923024.mount: Deactivated successfully. 
Mar 19 11:33:50.979559 containerd[1956]: time="2025-03-19T11:33:50.979477606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:50.981582 containerd[1956]: time="2025-03-19T11:33:50.981496462Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552766"
Mar 19 11:33:50.982401 containerd[1956]: time="2025-03-19T11:33:50.982295122Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:50.987885 containerd[1956]: time="2025-03-19T11:33:50.987836014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:50.990466 containerd[1956]: time="2025-03-19T11:33:50.990168466Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.617192201s"
Mar 19 11:33:50.990466 containerd[1956]: time="2025-03-19T11:33:50.990231994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\""
Mar 19 11:33:50.990466 containerd[1956]: time="2025-03-19T11:33:50.991001566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 19 11:33:53.073272 containerd[1956]: time="2025-03-19T11:33:53.072962589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:53.075012 containerd[1956]: time="2025-03-19T11:33:53.074931261Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458978"
Mar 19 11:33:53.077211 containerd[1956]: time="2025-03-19T11:33:53.077162205Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:53.082863 containerd[1956]: time="2025-03-19T11:33:53.082781613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:53.085515 containerd[1956]: time="2025-03-19T11:33:53.085146201Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 2.094097775s"
Mar 19 11:33:53.085515 containerd[1956]: time="2025-03-19T11:33:53.085211553Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\""
Mar 19 11:33:53.086483 containerd[1956]: time="2025-03-19T11:33:53.086138469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 19 11:33:54.823666 containerd[1956]: time="2025-03-19T11:33:54.822307249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:54.824636 containerd[1956]: time="2025-03-19T11:33:54.824563969Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125829"
Mar 19 11:33:54.825896 containerd[1956]: time="2025-03-19T11:33:54.825789877Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:54.832599 containerd[1956]: time="2025-03-19T11:33:54.832493054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:54.835715 containerd[1956]: time="2025-03-19T11:33:54.834612194Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.748413149s"
Mar 19 11:33:54.835715 containerd[1956]: time="2025-03-19T11:33:54.834670790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\""
Mar 19 11:33:54.835715 containerd[1956]: time="2025-03-19T11:33:54.835237622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 19 11:33:55.219512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 19 11:33:55.227731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:33:55.544690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:33:55.551827 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:33:55.650767 kubelet[2579]: E0319 11:33:55.650707 2579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:33:55.654372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:33:55.655247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:33:55.655827 systemd[1]: kubelet.service: Consumed 277ms CPU time, 97.2M memory peak.
Mar 19 11:33:56.230951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835513435.mount: Deactivated successfully.
Mar 19 11:33:56.904002 containerd[1956]: time="2025-03-19T11:33:56.903937648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:56.905474 containerd[1956]: time="2025-03-19T11:33:56.905369908Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871915"
Mar 19 11:33:56.906580 containerd[1956]: time="2025-03-19T11:33:56.906526696Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:56.910280 containerd[1956]: time="2025-03-19T11:33:56.910192900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:56.911957 containerd[1956]: time="2025-03-19T11:33:56.911772892Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 2.076493354s"
Mar 19 11:33:56.911957 containerd[1956]: time="2025-03-19T11:33:56.911821936Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\""
Mar 19 11:33:56.912587 containerd[1956]: time="2025-03-19T11:33:56.912541336Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 19 11:33:57.419329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417629011.mount: Deactivated successfully.
Mar 19 11:33:58.471420 containerd[1956]: time="2025-03-19T11:33:58.471309796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:58.473608 containerd[1956]: time="2025-03-19T11:33:58.473539924Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Mar 19 11:33:58.476069 containerd[1956]: time="2025-03-19T11:33:58.475998076Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:58.482131 containerd[1956]: time="2025-03-19T11:33:58.482052892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:58.485387 containerd[1956]: time="2025-03-19T11:33:58.484166260Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.571565572s"
Mar 19 11:33:58.485387 containerd[1956]: time="2025-03-19T11:33:58.484222672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 19 11:33:58.486361 containerd[1956]: time="2025-03-19T11:33:58.486294640Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 19 11:33:59.011029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163063294.mount: Deactivated successfully.
Mar 19 11:33:59.022623 containerd[1956]: time="2025-03-19T11:33:59.022553402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:59.025475 containerd[1956]: time="2025-03-19T11:33:59.025410638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 19 11:33:59.027723 containerd[1956]: time="2025-03-19T11:33:59.027654626Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:59.032613 containerd[1956]: time="2025-03-19T11:33:59.032550926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:33:59.034781 containerd[1956]: time="2025-03-19T11:33:59.034371926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 548.000258ms"
Mar 19 11:33:59.034781 containerd[1956]: time="2025-03-19T11:33:59.034422734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 19 11:33:59.035044 containerd[1956]: time="2025-03-19T11:33:59.034982102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 19 11:33:59.701674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount836928383.mount: Deactivated successfully.
Mar 19 11:34:00.550051 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 19 11:34:03.483742 containerd[1956]: time="2025-03-19T11:34:03.482860352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:34:03.485313 containerd[1956]: time="2025-03-19T11:34:03.485231877Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Mar 19 11:34:03.487507 containerd[1956]: time="2025-03-19T11:34:03.487454613Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:34:03.494079 containerd[1956]: time="2025-03-19T11:34:03.493988481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:34:03.496993 containerd[1956]: time="2025-03-19T11:34:03.496804197Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.461771875s"
Mar 19 11:34:03.496993 containerd[1956]: time="2025-03-19T11:34:03.496854369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Mar 19 11:34:05.719548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 19 11:34:05.728882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:34:06.034843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:34:06.036603 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 19 11:34:06.113425 kubelet[2729]: E0319 11:34:06.112063 2729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 19 11:34:06.117169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 11:34:06.117709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 19 11:34:06.118257 systemd[1]: kubelet.service: Consumed 256ms CPU time, 92.5M memory peak.
Mar 19 11:34:11.301889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:34:11.302225 systemd[1]: kubelet.service: Consumed 256ms CPU time, 92.5M memory peak.
Mar 19 11:34:11.313831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:34:11.374778 systemd[1]: Reload requested from client PID 2744 ('systemctl') (unit session-9.scope)...
Mar 19 11:34:11.374815 systemd[1]: Reloading...
Mar 19 11:34:11.643372 zram_generator::config[2792]: No configuration found.
Mar 19 11:34:11.859493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:34:12.079886 systemd[1]: Reloading finished in 704 ms.
Mar 19 11:34:12.166221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:34:12.180207 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 11:34:12.182196 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:34:12.184915 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 11:34:12.185508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:34:12.185608 systemd[1]: kubelet.service: Consumed 189ms CPU time, 82.3M memory peak.
Mar 19 11:34:12.193894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 19 11:34:12.468635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 19 11:34:12.479863 (kubelet)[2855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 19 11:34:12.551406 kubelet[2855]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:34:12.551406 kubelet[2855]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 11:34:12.551406 kubelet[2855]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 11:34:12.551406 kubelet[2855]: I0319 11:34:12.550410 2855 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 11:34:13.352973 kubelet[2855]: I0319 11:34:13.352904 2855 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 19 11:34:13.352973 kubelet[2855]: I0319 11:34:13.352957 2855 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 19 11:34:13.353796 kubelet[2855]: I0319 11:34:13.353748 2855 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 19 11:34:13.395772 kubelet[2855]: E0319 11:34:13.395708 2855 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.152:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:34:13.398048 kubelet[2855]: I0319 11:34:13.397987 2855 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 19 11:34:13.409373 kubelet[2855]: E0319 11:34:13.409305 2855 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 19 11:34:13.409631 kubelet[2855]: I0319 11:34:13.409609 2855 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 19 11:34:13.416014 kubelet[2855]: I0319 11:34:13.415959 2855 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 19 11:34:13.417371 kubelet[2855]: I0319 11:34:13.417313 2855 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 11:34:13.417832 kubelet[2855]: I0319 11:34:13.417769 2855 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 11:34:13.418121 kubelet[2855]: I0319 11:34:13.417826 2855 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-152","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 11:34:13.418291 kubelet[2855]: I0319 11:34:13.418169 2855 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 11:34:13.418291 kubelet[2855]: I0319 11:34:13.418190 2855 container_manager_linux.go:300] "Creating device plugin manager"
Mar 19 11:34:13.418450 kubelet[2855]: I0319 11:34:13.418438 2855 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:34:13.423631 kubelet[2855]: I0319 11:34:13.422955 2855 kubelet.go:408] "Attempting to sync node with API server"
Mar 19 11:34:13.423631 kubelet[2855]: I0319 11:34:13.423011 2855 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 11:34:13.423631 kubelet[2855]: I0319 11:34:13.423059 2855 kubelet.go:314] "Adding apiserver pod source"
Mar 19 11:34:13.423631 kubelet[2855]: I0319 11:34:13.423080 2855 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 11:34:13.429645 kubelet[2855]: I0319 11:34:13.429611 2855 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 19 11:34:13.432890 kubelet[2855]: I0319 11:34:13.432849 2855 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 11:34:13.435795 kubelet[2855]: W0319 11:34:13.434599 2855 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 19 11:34:13.438380 kubelet[2855]: I0319 11:34:13.437505 2855 server.go:1269] "Started kubelet"
Mar 19 11:34:13.438380 kubelet[2855]: W0319 11:34:13.437716 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-152&limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused
Mar 19 11:34:13.438380 kubelet[2855]: E0319 11:34:13.437787 2855 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-152&limit=500&resourceVersion=0\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:34:13.438380 kubelet[2855]: W0319 11:34:13.438015 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused
Mar 19 11:34:13.438380 kubelet[2855]: E0319 11:34:13.438077 2855 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:34:13.438380 kubelet[2855]: I0319 11:34:13.438231 2855 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 11:34:13.439112 kubelet[2855]: I0319 11:34:13.439026 2855 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 11:34:13.446477 kubelet[2855]: I0319 11:34:13.446424 2855 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 11:34:13.451074 kubelet[2855]: I0319 11:34:13.451016 2855 server.go:460] "Adding debug handlers to kubelet server"
Mar 19 11:34:13.451830 kubelet[2855]: I0319 11:34:13.451799 2855 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 11:34:13.459702 kubelet[2855]: I0319 11:34:13.458679 2855 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 19 11:34:13.459702 kubelet[2855]: I0319 11:34:13.458983 2855 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 19 11:34:13.460715 kubelet[2855]: E0319 11:34:13.459329 2855 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-152\" not found"
Mar 19 11:34:13.463391 kubelet[2855]: E0319 11:34:13.459323 2855 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.152:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.152:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-152.182e3110a7720fe6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-152,UID:ip-172-31-31-152,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-152,},FirstTimestamp:2025-03-19 11:34:13.437468646 +0000 UTC m=+0.951480714,LastTimestamp:2025-03-19 11:34:13.437468646 +0000 UTC m=+0.951480714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-152,}"
Mar 19 11:34:13.463391 kubelet[2855]: E0319 11:34:13.462476 2855 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-152?timeout=10s\": dial tcp 172.31.31.152:6443: connect: connection refused" interval="200ms"
Mar 19 11:34:13.467861 kubelet[2855]: W0319 11:34:13.467657 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused
Mar 19 11:34:13.467861 kubelet[2855]: E0319 11:34:13.467773 2855 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:34:13.468395 kubelet[2855]: I0319 11:34:13.468324 2855 factory.go:221] Registration of the systemd container factory successfully
Mar 19 11:34:13.469509 kubelet[2855]: I0319 11:34:13.468490 2855 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 19 11:34:13.471030 kubelet[2855]: I0319 11:34:13.470984 2855 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 19 11:34:13.471143 kubelet[2855]: I0319 11:34:13.471120 2855 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 11:34:13.473466 kubelet[2855]: E0319 11:34:13.473370 2855 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 11:34:13.473646 kubelet[2855]: I0319 11:34:13.473602 2855 factory.go:221] Registration of the containerd container factory successfully
Mar 19 11:34:13.498430 kubelet[2855]: I0319 11:34:13.498368 2855 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 11:34:13.500669 kubelet[2855]: I0319 11:34:13.500628 2855 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 11:34:13.501252 kubelet[2855]: I0319 11:34:13.500829 2855 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 19 11:34:13.501252 kubelet[2855]: I0319 11:34:13.500866 2855 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 19 11:34:13.501252 kubelet[2855]: E0319 11:34:13.500935 2855 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 11:34:13.512517 kubelet[2855]: W0319 11:34:13.512440 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused
Mar 19 11:34:13.513563 kubelet[2855]: E0319 11:34:13.513239 2855 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError"
Mar 19 11:34:13.526654 kubelet[2855]: I0319 11:34:13.526613 2855 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 19 11:34:13.526654 kubelet[2855]: I0319 11:34:13.526646 2855 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 19 11:34:13.526841 kubelet[2855]: I0319 11:34:13.526678 2855 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:34:13.533988 kubelet[2855]: I0319 11:34:13.533934 2855 policy_none.go:49] "None policy: Start"
Mar 19 11:34:13.535577 kubelet[2855]: I0319 11:34:13.534993 2855 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 19 11:34:13.535577 kubelet[2855]: I0319 11:34:13.535034 2855 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 11:34:13.550408 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 19 11:34:13.560593 kubelet[2855]: E0319 11:34:13.560550 2855 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-152\" not found"
Mar 19 11:34:13.563867 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 19 11:34:13.581731 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 19 11:34:13.585383 kubelet[2855]: I0319 11:34:13.585227 2855 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 11:34:13.585715 kubelet[2855]: I0319 11:34:13.585562 2855 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 11:34:13.585715 kubelet[2855]: I0319 11:34:13.585594 2855 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 11:34:13.586277 kubelet[2855]: I0319 11:34:13.586242 2855 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 11:34:13.588998 kubelet[2855]: E0319 11:34:13.588841 2855 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-152\" not found"
Mar 19 11:34:13.620883 systemd[1]: Created slice kubepods-burstable-pod148cb497f2610ef1cc59565e37707e3f.slice - libcontainer container kubepods-burstable-pod148cb497f2610ef1cc59565e37707e3f.slice.
Mar 19 11:34:13.650638 systemd[1]: Created slice kubepods-burstable-pod3812d10f2b1c73b324f2ad2acd832cff.slice - libcontainer container kubepods-burstable-pod3812d10f2b1c73b324f2ad2acd832cff.slice.
Mar 19 11:34:13.663049 kubelet[2855]: E0319 11:34:13.662939 2855 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-152?timeout=10s\": dial tcp 172.31.31.152:6443: connect: connection refused" interval="400ms"
Mar 19 11:34:13.670566 systemd[1]: Created slice kubepods-burstable-pod5c781db6d6cfbac3240484d00a29e435.slice - libcontainer container kubepods-burstable-pod5c781db6d6cfbac3240484d00a29e435.slice.
Mar 19 11:34:13.672218 kubelet[2855]: I0319 11:34:13.671531 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152"
Mar 19 11:34:13.672218 kubelet[2855]: I0319 11:34:13.671578 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152"
Mar 19 11:34:13.672218 kubelet[2855]: I0319 11:34:13.671614 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3812d10f2b1c73b324f2ad2acd832cff-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-152\" (UID: \"3812d10f2b1c73b324f2ad2acd832cff\") " pod="kube-system/kube-scheduler-ip-172-31-31-152"
Mar 19 11:34:13.672218 kubelet[2855]: I0319 11:34:13.671649 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c781db6d6cfbac3240484d00a29e435-ca-certs\") pod \"kube-apiserver-ip-172-31-31-152\" (UID: \"5c781db6d6cfbac3240484d00a29e435\") " pod="kube-system/kube-apiserver-ip-172-31-31-152"
Mar 19 11:34:13.672218 kubelet[2855]: I0319 11:34:13.671712 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c781db6d6cfbac3240484d00a29e435-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-152\" (UID: \"5c781db6d6cfbac3240484d00a29e435\") " pod="kube-system/kube-apiserver-ip-172-31-31-152"
Mar 19 11:34:13.672560 kubelet[2855]: I0319 11:34:13.671792 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c781db6d6cfbac3240484d00a29e435-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-152\" (UID: \"5c781db6d6cfbac3240484d00a29e435\") " pod="kube-system/kube-apiserver-ip-172-31-31-152"
Mar 19 11:34:13.672560 kubelet[2855]: I0319 11:34:13.671849 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152"
Mar 19 11:34:13.672560 kubelet[2855]: I0319 11:34:13.671901 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152"
Mar 19 11:34:13.672560 kubelet[2855]: I0319 11:34:13.671952 2855 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152"
Mar 19 11:34:13.688724 kubelet[2855]: I0319 11:34:13.688679 2855 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-152"
Mar 19 11:34:13.689272 kubelet[2855]: E0319 11:34:13.689194 2855 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.152:6443/api/v1/nodes\": dial tcp 172.31.31.152:6443: connect: connection refused" node="ip-172-31-31-152"
Mar 19 11:34:13.878077 kubelet[2855]: E0319 11:34:13.877811 2855 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.152:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.152:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-152.182e3110a7720fe6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-152,UID:ip-172-31-31-152,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-152,},FirstTimestamp:2025-03-19 11:34:13.437468646 +0000 UTC m=+0.951480714,LastTimestamp:2025-03-19 11:34:13.437468646 +0000 UTC m=+0.951480714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-152,}"
Mar 19 11:34:13.892129 kubelet[2855]: I0319 11:34:13.892072 2855 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-152"
Mar 19
11:34:13.892651 kubelet[2855]: E0319 11:34:13.892591 2855 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.152:6443/api/v1/nodes\": dial tcp 172.31.31.152:6443: connect: connection refused" node="ip-172-31-31-152" Mar 19 11:34:13.948174 containerd[1956]: time="2025-03-19T11:34:13.948119900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-152,Uid:148cb497f2610ef1cc59565e37707e3f,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:13.966940 containerd[1956]: time="2025-03-19T11:34:13.966866577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-152,Uid:3812d10f2b1c73b324f2ad2acd832cff,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:13.976844 containerd[1956]: time="2025-03-19T11:34:13.976690221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-152,Uid:5c781db6d6cfbac3240484d00a29e435,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:14.064558 kubelet[2855]: E0319 11:34:14.064489 2855 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-152?timeout=10s\": dial tcp 172.31.31.152:6443: connect: connection refused" interval="800ms" Mar 19 11:34:14.241167 kubelet[2855]: W0319 11:34:14.241077 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-152&limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused Mar 19 11:34:14.241376 kubelet[2855]: E0319 11:34:14.241177 2855 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-152&limit=500&resourceVersion=0\": dial tcp 
172.31.31.152:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:14.295428 kubelet[2855]: I0319 11:34:14.295386 2855 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-152" Mar 19 11:34:14.295955 kubelet[2855]: E0319 11:34:14.295870 2855 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.152:6443/api/v1/nodes\": dial tcp 172.31.31.152:6443: connect: connection refused" node="ip-172-31-31-152" Mar 19 11:34:14.384486 kubelet[2855]: W0319 11:34:14.384275 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused Mar 19 11:34:14.384486 kubelet[2855]: E0319 11:34:14.384392 2855 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:14.402471 kubelet[2855]: W0319 11:34:14.402379 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused Mar 19 11:34:14.402625 kubelet[2855]: E0319 11:34:14.402483 2855 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:14.423456 update_engine[1932]: I20250319 
11:34:14.423374 1932 update_attempter.cc:509] Updating boot flags... Mar 19 11:34:14.480141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254736668.mount: Deactivated successfully. Mar 19 11:34:14.505931 containerd[1956]: time="2025-03-19T11:34:14.505732471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:14.515976 containerd[1956]: time="2025-03-19T11:34:14.515893567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 19 11:34:14.517439 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2905) Mar 19 11:34:14.521779 containerd[1956]: time="2025-03-19T11:34:14.521708791Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:14.531852 containerd[1956]: time="2025-03-19T11:34:14.529303699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:14.531852 containerd[1956]: time="2025-03-19T11:34:14.531156355Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:14.533952 containerd[1956]: time="2025-03-19T11:34:14.533858227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:34:14.536747 containerd[1956]: time="2025-03-19T11:34:14.536575915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:34:14.545463 
containerd[1956]: time="2025-03-19T11:34:14.545399215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:14.552320 containerd[1956]: time="2025-03-19T11:34:14.552260263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.40833ms" Mar 19 11:34:14.557968 containerd[1956]: time="2025-03-19T11:34:14.557885876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.6511ms" Mar 19 11:34:14.564097 containerd[1956]: time="2025-03-19T11:34:14.564023996Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.034971ms" Mar 19 11:34:14.603581 kubelet[2855]: W0319 11:34:14.603515 2855 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.152:6443: connect: connection refused Mar 19 11:34:14.604121 kubelet[2855]: E0319 11:34:14.603588 2855 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.152:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:14.866128 kubelet[2855]: E0319 11:34:14.865854 2855 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-152?timeout=10s\": dial tcp 172.31.31.152:6443: connect: connection refused" interval="1.6s" Mar 19 11:34:14.874211 containerd[1956]: time="2025-03-19T11:34:14.873561957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:14.874921 containerd[1956]: time="2025-03-19T11:34:14.874848777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:14.883975 containerd[1956]: time="2025-03-19T11:34:14.876472269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:14.883975 containerd[1956]: time="2025-03-19T11:34:14.877240317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:14.943817 containerd[1956]: time="2025-03-19T11:34:14.942362601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:14.943817 containerd[1956]: time="2025-03-19T11:34:14.942457341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:14.943817 containerd[1956]: time="2025-03-19T11:34:14.942484173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:14.943817 containerd[1956]: time="2025-03-19T11:34:14.942622833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:14.954163 containerd[1956]: time="2025-03-19T11:34:14.953681073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:14.954163 containerd[1956]: time="2025-03-19T11:34:14.953779773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:14.954163 containerd[1956]: time="2025-03-19T11:34:14.953808333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:14.954163 containerd[1956]: time="2025-03-19T11:34:14.953971965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:14.954881 systemd[1]: Started cri-containerd-aedb254a9e25d98e027a09cece6a16fc95a5a1cb56d70b3fd1c24f8d80ec8e9b.scope - libcontainer container aedb254a9e25d98e027a09cece6a16fc95a5a1cb56d70b3fd1c24f8d80ec8e9b. Mar 19 11:34:15.011635 systemd[1]: Started cri-containerd-b6fe132a053e6cc31f75ced404d91eb995701c72a862f2ca9264d00abe657a29.scope - libcontainer container b6fe132a053e6cc31f75ced404d91eb995701c72a862f2ca9264d00abe657a29. Mar 19 11:34:15.014836 systemd[1]: Started cri-containerd-f4bb3ea63efbf712d503807d38cc9927bdb7d2382805290348ebe01d82cae10b.scope - libcontainer container f4bb3ea63efbf712d503807d38cc9927bdb7d2382805290348ebe01d82cae10b. 
Mar 19 11:34:15.083385 containerd[1956]: time="2025-03-19T11:34:15.083059014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-152,Uid:148cb497f2610ef1cc59565e37707e3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"aedb254a9e25d98e027a09cece6a16fc95a5a1cb56d70b3fd1c24f8d80ec8e9b\"" Mar 19 11:34:15.091574 containerd[1956]: time="2025-03-19T11:34:15.091522866Z" level=info msg="CreateContainer within sandbox \"aedb254a9e25d98e027a09cece6a16fc95a5a1cb56d70b3fd1c24f8d80ec8e9b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:34:15.104314 kubelet[2855]: I0319 11:34:15.103592 2855 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-152" Mar 19 11:34:15.104314 kubelet[2855]: E0319 11:34:15.104184 2855 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.152:6443/api/v1/nodes\": dial tcp 172.31.31.152:6443: connect: connection refused" node="ip-172-31-31-152" Mar 19 11:34:15.127781 containerd[1956]: time="2025-03-19T11:34:15.127227498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-152,Uid:5c781db6d6cfbac3240484d00a29e435,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6fe132a053e6cc31f75ced404d91eb995701c72a862f2ca9264d00abe657a29\"" Mar 19 11:34:15.136771 containerd[1956]: time="2025-03-19T11:34:15.136576386Z" level=info msg="CreateContainer within sandbox \"b6fe132a053e6cc31f75ced404d91eb995701c72a862f2ca9264d00abe657a29\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:34:15.140971 containerd[1956]: time="2025-03-19T11:34:15.140888310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-152,Uid:3812d10f2b1c73b324f2ad2acd832cff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4bb3ea63efbf712d503807d38cc9927bdb7d2382805290348ebe01d82cae10b\"" Mar 19 11:34:15.146004 containerd[1956]: 
time="2025-03-19T11:34:15.145918854Z" level=info msg="CreateContainer within sandbox \"aedb254a9e25d98e027a09cece6a16fc95a5a1cb56d70b3fd1c24f8d80ec8e9b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47\"" Mar 19 11:34:15.147321 containerd[1956]: time="2025-03-19T11:34:15.147279534Z" level=info msg="CreateContainer within sandbox \"f4bb3ea63efbf712d503807d38cc9927bdb7d2382805290348ebe01d82cae10b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:34:15.148014 containerd[1956]: time="2025-03-19T11:34:15.147708366Z" level=info msg="StartContainer for \"18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47\"" Mar 19 11:34:15.184704 containerd[1956]: time="2025-03-19T11:34:15.184642123Z" level=info msg="CreateContainer within sandbox \"b6fe132a053e6cc31f75ced404d91eb995701c72a862f2ca9264d00abe657a29\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c738f0c92dbcaed7eca1ca93cb6c2e499bf77848588e544cccc9a01b884f92bb\"" Mar 19 11:34:15.188412 containerd[1956]: time="2025-03-19T11:34:15.186800371Z" level=info msg="StartContainer for \"c738f0c92dbcaed7eca1ca93cb6c2e499bf77848588e544cccc9a01b884f92bb\"" Mar 19 11:34:15.197028 systemd[1]: Started cri-containerd-18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47.scope - libcontainer container 18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47. 
Mar 19 11:34:15.197660 containerd[1956]: time="2025-03-19T11:34:15.197580823Z" level=info msg="CreateContainer within sandbox \"f4bb3ea63efbf712d503807d38cc9927bdb7d2382805290348ebe01d82cae10b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d\"" Mar 19 11:34:15.199972 containerd[1956]: time="2025-03-19T11:34:15.199914055Z" level=info msg="StartContainer for \"05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d\"" Mar 19 11:34:15.264441 systemd[1]: Started cri-containerd-c738f0c92dbcaed7eca1ca93cb6c2e499bf77848588e544cccc9a01b884f92bb.scope - libcontainer container c738f0c92dbcaed7eca1ca93cb6c2e499bf77848588e544cccc9a01b884f92bb. Mar 19 11:34:15.286010 systemd[1]: Started cri-containerd-05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d.scope - libcontainer container 05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d. Mar 19 11:34:15.330193 containerd[1956]: time="2025-03-19T11:34:15.329986663Z" level=info msg="StartContainer for \"18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47\" returns successfully" Mar 19 11:34:15.390716 containerd[1956]: time="2025-03-19T11:34:15.390514472Z" level=info msg="StartContainer for \"c738f0c92dbcaed7eca1ca93cb6c2e499bf77848588e544cccc9a01b884f92bb\" returns successfully" Mar 19 11:34:15.453932 containerd[1956]: time="2025-03-19T11:34:15.453842120Z" level=info msg="StartContainer for \"05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d\" returns successfully" Mar 19 11:34:16.707290 kubelet[2855]: I0319 11:34:16.707242 2855 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-152" Mar 19 11:34:19.995174 kubelet[2855]: E0319 11:34:19.995120 2855 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-152\" not found" node="ip-172-31-31-152" Mar 19 11:34:20.073920 kubelet[2855]: I0319 
11:34:20.073863 2855 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-152" Mar 19 11:34:20.445117 kubelet[2855]: I0319 11:34:20.444558 2855 apiserver.go:52] "Watching apiserver" Mar 19 11:34:20.471741 kubelet[2855]: I0319 11:34:20.471698 2855 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 11:34:21.957098 systemd[1]: Reload requested from client PID 3224 ('systemctl') (unit session-9.scope)... Mar 19 11:34:21.957127 systemd[1]: Reloading... Mar 19 11:34:22.175686 zram_generator::config[3284]: No configuration found. Mar 19 11:34:22.387494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:34:22.646261 systemd[1]: Reloading finished in 688 ms. Mar 19 11:34:22.692439 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:22.711036 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:34:22.711577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:22.711677 systemd[1]: kubelet.service: Consumed 1.625s CPU time, 116.3M memory peak. Mar 19 11:34:22.716993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:23.031619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:23.044011 (kubelet)[3329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:34:23.137306 kubelet[3329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 19 11:34:23.137306 kubelet[3329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 19 11:34:23.137306 kubelet[3329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:34:23.137306 kubelet[3329]: I0319 11:34:23.137524 3329 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:34:23.154381 kubelet[3329]: I0319 11:34:23.153777 3329 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 19 11:34:23.154381 kubelet[3329]: I0319 11:34:23.153826 3329 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:34:23.154381 kubelet[3329]: I0319 11:34:23.154215 3329 server.go:929] "Client rotation is on, will bootstrap in background" Mar 19 11:34:23.158063 kubelet[3329]: I0319 11:34:23.158021 3329 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 11:34:23.169886 kubelet[3329]: I0319 11:34:23.169830 3329 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:34:23.178627 kubelet[3329]: E0319 11:34:23.178516 3329 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:34:23.178627 kubelet[3329]: I0319 11:34:23.178609 3329 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 19 11:34:23.188275 sudo[3343]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 19 11:34:23.188961 sudo[3343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 19 11:34:23.190702 kubelet[3329]: I0319 11:34:23.190208 3329 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:34:23.190702 kubelet[3329]: I0319 11:34:23.190508 3329 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 11:34:23.191265 kubelet[3329]: I0319 11:34:23.191199 3329 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:34:23.191699 kubelet[3329]: I0319 11:34:23.191260 3329 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-152","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"Grac
ePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:34:23.191872 kubelet[3329]: I0319 11:34:23.191826 3329 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:34:23.191872 kubelet[3329]: I0319 11:34:23.191851 3329 container_manager_linux.go:300] "Creating device plugin manager" Mar 19 11:34:23.191984 kubelet[3329]: I0319 11:34:23.191907 3329 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:34:23.192663 kubelet[3329]: I0319 11:34:23.192118 3329 kubelet.go:408] "Attempting to sync node with API server" Mar 19 11:34:23.192663 kubelet[3329]: I0319 11:34:23.192157 3329 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:34:23.192663 kubelet[3329]: I0319 11:34:23.192198 3329 kubelet.go:314] "Adding apiserver pod source" Mar 19 11:34:23.192663 kubelet[3329]: I0319 11:34:23.192281 3329 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:34:23.199063 kubelet[3329]: I0319 11:34:23.197383 3329 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:34:23.199063 kubelet[3329]: I0319 11:34:23.198199 3329 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:34:23.199063 kubelet[3329]: I0319 11:34:23.198872 3329 server.go:1269] "Started kubelet" Mar 19 11:34:23.203284 kubelet[3329]: I0319 11:34:23.203248 3329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:34:23.216066 kubelet[3329]: I0319 11:34:23.216031 3329 volume_manager.go:289] "Starting 
Kubelet Volume Manager" Mar 19 11:34:23.216706 kubelet[3329]: E0319 11:34:23.216654 3329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-152\" not found" Mar 19 11:34:23.217009 kubelet[3329]: I0319 11:34:23.216946 3329 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:34:23.218040 kubelet[3329]: I0319 11:34:23.217822 3329 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 19 11:34:23.220361 kubelet[3329]: I0319 11:34:23.220296 3329 server.go:460] "Adding debug handlers to kubelet server" Mar 19 11:34:23.220652 kubelet[3329]: I0319 11:34:23.220630 3329 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:34:23.224938 kubelet[3329]: I0319 11:34:23.224882 3329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:34:23.226022 kubelet[3329]: I0319 11:34:23.225147 3329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:34:23.226790 kubelet[3329]: I0319 11:34:23.226761 3329 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:34:23.247003 kubelet[3329]: I0319 11:34:23.242830 3329 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:34:23.247003 kubelet[3329]: I0319 11:34:23.243030 3329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:34:23.278382 kubelet[3329]: I0319 11:34:23.278228 3329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:34:23.282962 kubelet[3329]: I0319 11:34:23.282920 3329 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 19 11:34:23.283172 kubelet[3329]: I0319 11:34:23.283150 3329 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 11:34:23.283315 kubelet[3329]: I0319 11:34:23.283294 3329 kubelet.go:2321] "Starting kubelet main sync loop" Mar 19 11:34:23.285567 kubelet[3329]: E0319 11:34:23.285452 3329 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:34:23.317690 kubelet[3329]: E0319 11:34:23.317640 3329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-152\" not found" Mar 19 11:34:23.325164 kubelet[3329]: I0319 11:34:23.325118 3329 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:34:23.340854 kubelet[3329]: E0319 11:34:23.337746 3329 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:34:23.385976 kubelet[3329]: E0319 11:34:23.385780 3329 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 11:34:23.465983 kubelet[3329]: I0319 11:34:23.465099 3329 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 19 11:34:23.465983 kubelet[3329]: I0319 11:34:23.465135 3329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 19 11:34:23.465983 kubelet[3329]: I0319 11:34:23.465185 3329 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:34:23.466892 kubelet[3329]: I0319 11:34:23.466488 3329 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 11:34:23.466892 kubelet[3329]: I0319 11:34:23.466521 3329 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 11:34:23.466892 kubelet[3329]: I0319 11:34:23.466665 3329 policy_none.go:49] "None policy: Start" Mar 19 11:34:23.469654 kubelet[3329]: I0319 11:34:23.469604 3329 
memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 11:34:23.469654 kubelet[3329]: I0319 11:34:23.469660 3329 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:34:23.470648 kubelet[3329]: I0319 11:34:23.470124 3329 state_mem.go:75] "Updated machine memory state" Mar 19 11:34:23.486802 kubelet[3329]: I0319 11:34:23.486623 3329 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:34:23.486923 kubelet[3329]: I0319 11:34:23.486898 3329 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:34:23.487541 kubelet[3329]: I0319 11:34:23.486917 3329 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:34:23.489506 kubelet[3329]: I0319 11:34:23.489462 3329 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:34:23.611455 kubelet[3329]: I0319 11:34:23.610878 3329 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-152" Mar 19 11:34:23.623472 kubelet[3329]: I0319 11:34:23.623429 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152" Mar 19 11:34:23.625384 kubelet[3329]: I0319 11:34:23.623671 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152" Mar 19 11:34:23.625752 kubelet[3329]: I0319 11:34:23.625520 3329 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3812d10f2b1c73b324f2ad2acd832cff-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-152\" (UID: \"3812d10f2b1c73b324f2ad2acd832cff\") " pod="kube-system/kube-scheduler-ip-172-31-31-152" Mar 19 11:34:23.625824 kubelet[3329]: I0319 11:34:23.625761 3329 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-152" Mar 19 11:34:23.625896 kubelet[3329]: I0319 11:34:23.625855 3329 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-152" Mar 19 11:34:23.626204 kubelet[3329]: I0319 11:34:23.625724 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c781db6d6cfbac3240484d00a29e435-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-152\" (UID: \"5c781db6d6cfbac3240484d00a29e435\") " pod="kube-system/kube-apiserver-ip-172-31-31-152" Mar 19 11:34:23.626204 kubelet[3329]: I0319 11:34:23.626060 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152" Mar 19 11:34:23.626204 kubelet[3329]: I0319 11:34:23.626130 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152" Mar 19 11:34:23.626600 kubelet[3329]: I0319 11:34:23.626172 3329 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/148cb497f2610ef1cc59565e37707e3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-152\" (UID: \"148cb497f2610ef1cc59565e37707e3f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-152" Mar 19 11:34:23.626600 kubelet[3329]: I0319 11:34:23.626473 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c781db6d6cfbac3240484d00a29e435-ca-certs\") pod \"kube-apiserver-ip-172-31-31-152\" (UID: \"5c781db6d6cfbac3240484d00a29e435\") " pod="kube-system/kube-apiserver-ip-172-31-31-152" Mar 19 11:34:23.626600 kubelet[3329]: I0319 11:34:23.626536 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c781db6d6cfbac3240484d00a29e435-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-152\" (UID: \"5c781db6d6cfbac3240484d00a29e435\") " pod="kube-system/kube-apiserver-ip-172-31-31-152" Mar 19 11:34:24.159300 sudo[3343]: pam_unix(sudo:session): session closed for user root Mar 19 11:34:24.205012 kubelet[3329]: I0319 11:34:24.204946 3329 apiserver.go:52] "Watching apiserver" Mar 19 11:34:24.218386 kubelet[3329]: I0319 11:34:24.218299 3329 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 19 11:34:24.385568 kubelet[3329]: E0319 11:34:24.385509 3329 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-152\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-152" Mar 19 11:34:24.434664 kubelet[3329]: I0319 11:34:24.434466 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-152" podStartSLOduration=1.434442737 podStartE2EDuration="1.434442737s" 
podCreationTimestamp="2025-03-19 11:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:24.421456865 +0000 UTC m=+1.369477280" watchObservedRunningTime="2025-03-19 11:34:24.434442737 +0000 UTC m=+1.382463128" Mar 19 11:34:24.451378 kubelet[3329]: I0319 11:34:24.451282 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-152" podStartSLOduration=1.451254209 podStartE2EDuration="1.451254209s" podCreationTimestamp="2025-03-19 11:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:24.434949185 +0000 UTC m=+1.382969600" watchObservedRunningTime="2025-03-19 11:34:24.451254209 +0000 UTC m=+1.399274612" Mar 19 11:34:24.468437 kubelet[3329]: I0319 11:34:24.468315 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-152" podStartSLOduration=1.468269837 podStartE2EDuration="1.468269837s" podCreationTimestamp="2025-03-19 11:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:24.451714073 +0000 UTC m=+1.399734488" watchObservedRunningTime="2025-03-19 11:34:24.468269837 +0000 UTC m=+1.416290240" Mar 19 11:34:26.584045 sudo[2307]: pam_unix(sudo:session): session closed for user root Mar 19 11:34:26.607521 sshd[2306]: Connection closed by 139.178.68.195 port 40806 Mar 19 11:34:26.608366 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Mar 19 11:34:26.613644 systemd[1]: sshd@8-172.31.31.152:22-139.178.68.195:40806.service: Deactivated successfully. Mar 19 11:34:26.618143 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 19 11:34:26.619182 systemd[1]: session-9.scope: Consumed 11.216s CPU time, 262.2M memory peak. Mar 19 11:34:26.623798 systemd-logind[1931]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:34:26.625692 systemd-logind[1931]: Removed session 9. Mar 19 11:34:28.476394 kubelet[3329]: I0319 11:34:28.476246 3329 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 19 11:34:28.478611 containerd[1956]: time="2025-03-19T11:34:28.477674421Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 19 11:34:28.479125 kubelet[3329]: I0319 11:34:28.478083 3329 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 19 11:34:28.796043 kubelet[3329]: W0319 11:34:28.795134 3329 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-31-152" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-152' and this object Mar 19 11:34:28.796043 kubelet[3329]: E0319 11:34:28.795211 3329 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-31-152\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-152' and this object" logger="UnhandledError" Mar 19 11:34:28.799826 systemd[1]: Created slice kubepods-besteffort-pode4b227dc_51b5_45db_9a52_0ba041b54727.slice - libcontainer container kubepods-besteffort-pode4b227dc_51b5_45db_9a52_0ba041b54727.slice. 
Mar 19 11:34:28.853115 systemd[1]: Created slice kubepods-burstable-podd3886f5b_16a9_404c_a86e_8e60ef9ee59b.slice - libcontainer container kubepods-burstable-podd3886f5b_16a9_404c_a86e_8e60ef9ee59b.slice. Mar 19 11:34:28.859917 kubelet[3329]: I0319 11:34:28.859857 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4b227dc-51b5-45db-9a52-0ba041b54727-kube-proxy\") pod \"kube-proxy-thggl\" (UID: \"e4b227dc-51b5-45db-9a52-0ba041b54727\") " pod="kube-system/kube-proxy-thggl" Mar 19 11:34:28.860062 kubelet[3329]: I0319 11:34:28.859925 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4b227dc-51b5-45db-9a52-0ba041b54727-xtables-lock\") pod \"kube-proxy-thggl\" (UID: \"e4b227dc-51b5-45db-9a52-0ba041b54727\") " pod="kube-system/kube-proxy-thggl" Mar 19 11:34:28.860062 kubelet[3329]: I0319 11:34:28.859963 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-run\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.860062 kubelet[3329]: I0319 11:34:28.859997 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-cgroup\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.860062 kubelet[3329]: I0319 11:34:28.860034 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cni-path\") pod \"cilium-ttv96\" (UID: 
\"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.860535 kubelet[3329]: I0319 11:34:28.860067 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-clustermesh-secrets\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.860535 kubelet[3329]: I0319 11:34:28.860103 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-config-path\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.860535 kubelet[3329]: I0319 11:34:28.860141 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4b227dc-51b5-45db-9a52-0ba041b54727-lib-modules\") pod \"kube-proxy-thggl\" (UID: \"e4b227dc-51b5-45db-9a52-0ba041b54727\") " pod="kube-system/kube-proxy-thggl" Mar 19 11:34:28.860535 kubelet[3329]: I0319 11:34:28.860183 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-bpf-maps\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864073 kubelet[3329]: I0319 11:34:28.860215 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hostproc\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864073 kubelet[3329]: I0319 
11:34:28.862075 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-lib-modules\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864073 kubelet[3329]: I0319 11:34:28.862176 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn784\" (UniqueName: \"kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-kube-api-access-jn784\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864073 kubelet[3329]: I0319 11:34:28.862262 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbnm\" (UniqueName: \"kubernetes.io/projected/e4b227dc-51b5-45db-9a52-0ba041b54727-kube-api-access-szbnm\") pod \"kube-proxy-thggl\" (UID: \"e4b227dc-51b5-45db-9a52-0ba041b54727\") " pod="kube-system/kube-proxy-thggl" Mar 19 11:34:28.864073 kubelet[3329]: I0319 11:34:28.862391 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-kernel\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864417 kubelet[3329]: I0319 11:34:28.862512 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hubble-tls\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864417 kubelet[3329]: I0319 11:34:28.862592 3329 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-etc-cni-netd\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864417 kubelet[3329]: I0319 11:34:28.862644 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-xtables-lock\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.864417 kubelet[3329]: I0319 11:34:28.862727 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-net\") pod \"cilium-ttv96\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") " pod="kube-system/cilium-ttv96" Mar 19 11:34:28.994752 kubelet[3329]: E0319 11:34:28.994685 3329 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 19 11:34:28.994752 kubelet[3329]: E0319 11:34:28.994735 3329 projected.go:194] Error preparing data for projected volume kube-api-access-jn784 for pod kube-system/cilium-ttv96: configmap "kube-root-ca.crt" not found Mar 19 11:34:28.995114 kubelet[3329]: E0319 11:34:28.994858 3329 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-kube-api-access-jn784 podName:d3886f5b-16a9-404c-a86e-8e60ef9ee59b nodeName:}" failed. No retries permitted until 2025-03-19 11:34:29.494825103 +0000 UTC m=+6.442845482 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn784" (UniqueName: "kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-kube-api-access-jn784") pod "cilium-ttv96" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b") : configmap "kube-root-ca.crt" not found Mar 19 11:34:29.010663 kubelet[3329]: E0319 11:34:29.009153 3329 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 19 11:34:29.010663 kubelet[3329]: E0319 11:34:29.009199 3329 projected.go:194] Error preparing data for projected volume kube-api-access-szbnm for pod kube-system/kube-proxy-thggl: configmap "kube-root-ca.crt" not found Mar 19 11:34:29.010663 kubelet[3329]: E0319 11:34:29.009279 3329 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4b227dc-51b5-45db-9a52-0ba041b54727-kube-api-access-szbnm podName:e4b227dc-51b5-45db-9a52-0ba041b54727 nodeName:}" failed. No retries permitted until 2025-03-19 11:34:29.509250255 +0000 UTC m=+6.457270646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-szbnm" (UniqueName: "kubernetes.io/projected/e4b227dc-51b5-45db-9a52-0ba041b54727-kube-api-access-szbnm") pod "kube-proxy-thggl" (UID: "e4b227dc-51b5-45db-9a52-0ba041b54727") : configmap "kube-root-ca.crt" not found Mar 19 11:34:29.614606 systemd[1]: Created slice kubepods-besteffort-pod40f2813f_e7e7_4b9f_9738_d3d8fa99388a.slice - libcontainer container kubepods-besteffort-pod40f2813f_e7e7_4b9f_9738_d3d8fa99388a.slice. 
Mar 19 11:34:29.668679 kubelet[3329]: I0319 11:34:29.668559 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-cilium-config-path\") pod \"cilium-operator-5d85765b45-xblz2\" (UID: \"40f2813f-e7e7-4b9f-9738-d3d8fa99388a\") " pod="kube-system/cilium-operator-5d85765b45-xblz2" Mar 19 11:34:29.668679 kubelet[3329]: I0319 11:34:29.668655 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2v4l\" (UniqueName: \"kubernetes.io/projected/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-kube-api-access-f2v4l\") pod \"cilium-operator-5d85765b45-xblz2\" (UID: \"40f2813f-e7e7-4b9f-9738-d3d8fa99388a\") " pod="kube-system/cilium-operator-5d85765b45-xblz2" Mar 19 11:34:29.767681 containerd[1956]: time="2025-03-19T11:34:29.767618231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttv96,Uid:d3886f5b-16a9-404c-a86e-8e60ef9ee59b,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:29.826268 containerd[1956]: time="2025-03-19T11:34:29.826081583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:29.826959 containerd[1956]: time="2025-03-19T11:34:29.826291787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:29.827384 containerd[1956]: time="2025-03-19T11:34:29.827283275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:29.827667 containerd[1956]: time="2025-03-19T11:34:29.827601671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:29.860650 systemd[1]: Started cri-containerd-54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328.scope - libcontainer container 54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328. Mar 19 11:34:29.904695 containerd[1956]: time="2025-03-19T11:34:29.904496304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttv96,Uid:d3886f5b-16a9-404c-a86e-8e60ef9ee59b,Namespace:kube-system,Attempt:0,} returns sandbox id \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\"" Mar 19 11:34:29.909313 containerd[1956]: time="2025-03-19T11:34:29.909243456Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 19 11:34:29.919961 containerd[1956]: time="2025-03-19T11:34:29.919900536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xblz2,Uid:40f2813f-e7e7-4b9f-9738-d3d8fa99388a,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:29.964139 containerd[1956]: time="2025-03-19T11:34:29.963964656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:29.964139 containerd[1956]: time="2025-03-19T11:34:29.964088484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:29.964577 containerd[1956]: time="2025-03-19T11:34:29.964125564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:29.964577 containerd[1956]: time="2025-03-19T11:34:29.964285200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:29.966854 kubelet[3329]: E0319 11:34:29.965455 3329 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 19 11:34:29.966854 kubelet[3329]: E0319 11:34:29.965614 3329 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4b227dc-51b5-45db-9a52-0ba041b54727-kube-proxy podName:e4b227dc-51b5-45db-9a52-0ba041b54727 nodeName:}" failed. No retries permitted until 2025-03-19 11:34:30.465561832 +0000 UTC m=+7.413582235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e4b227dc-51b5-45db-9a52-0ba041b54727-kube-proxy") pod "kube-proxy-thggl" (UID: "e4b227dc-51b5-45db-9a52-0ba041b54727") : failed to sync configmap cache: timed out waiting for the condition Mar 19 11:34:30.013649 systemd[1]: Started cri-containerd-01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541.scope - libcontainer container 01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541. Mar 19 11:34:30.074695 containerd[1956]: time="2025-03-19T11:34:30.074319885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xblz2,Uid:40f2813f-e7e7-4b9f-9738-d3d8fa99388a,Namespace:kube-system,Attempt:0,} returns sandbox id \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\"" Mar 19 11:34:30.614504 containerd[1956]: time="2025-03-19T11:34:30.614432219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thggl,Uid:e4b227dc-51b5-45db-9a52-0ba041b54727,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:30.668819 containerd[1956]: time="2025-03-19T11:34:30.668656716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:30.668819 containerd[1956]: time="2025-03-19T11:34:30.668748648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:30.668819 containerd[1956]: time="2025-03-19T11:34:30.668775216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:30.669417 containerd[1956]: time="2025-03-19T11:34:30.668904720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:30.718741 systemd[1]: Started cri-containerd-73d683d6278e00f12335900521c223d2f8c4a2c9ff7196689a5d3636fd574d90.scope - libcontainer container 73d683d6278e00f12335900521c223d2f8c4a2c9ff7196689a5d3636fd574d90. Mar 19 11:34:30.762296 containerd[1956]: time="2025-03-19T11:34:30.762220164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thggl,Uid:e4b227dc-51b5-45db-9a52-0ba041b54727,Namespace:kube-system,Attempt:0,} returns sandbox id \"73d683d6278e00f12335900521c223d2f8c4a2c9ff7196689a5d3636fd574d90\"" Mar 19 11:34:30.768417 containerd[1956]: time="2025-03-19T11:34:30.768352572Z" level=info msg="CreateContainer within sandbox \"73d683d6278e00f12335900521c223d2f8c4a2c9ff7196689a5d3636fd574d90\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:34:30.800104 containerd[1956]: time="2025-03-19T11:34:30.800044536Z" level=info msg="CreateContainer within sandbox \"73d683d6278e00f12335900521c223d2f8c4a2c9ff7196689a5d3636fd574d90\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4db74e67c7c1009aa7b14a04d37b0752a5886d86286ca5dcfc2e4bf9b4f89bfc\"" Mar 19 11:34:30.801755 containerd[1956]: time="2025-03-19T11:34:30.801684816Z" level=info msg="StartContainer for \"4db74e67c7c1009aa7b14a04d37b0752a5886d86286ca5dcfc2e4bf9b4f89bfc\"" 
Mar 19 11:34:30.853649 systemd[1]: Started cri-containerd-4db74e67c7c1009aa7b14a04d37b0752a5886d86286ca5dcfc2e4bf9b4f89bfc.scope - libcontainer container 4db74e67c7c1009aa7b14a04d37b0752a5886d86286ca5dcfc2e4bf9b4f89bfc. Mar 19 11:34:30.922016 containerd[1956]: time="2025-03-19T11:34:30.921381145Z" level=info msg="StartContainer for \"4db74e67c7c1009aa7b14a04d37b0752a5886d86286ca5dcfc2e4bf9b4f89bfc\" returns successfully" Mar 19 11:34:35.089057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935602205.mount: Deactivated successfully. Mar 19 11:34:39.237320 containerd[1956]: time="2025-03-19T11:34:39.237235974Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:34:39.239159 containerd[1956]: time="2025-03-19T11:34:39.239075250Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 19 11:34:39.241780 containerd[1956]: time="2025-03-19T11:34:39.241704486Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:34:39.245430 containerd[1956]: time="2025-03-19T11:34:39.245140182Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.335833846s" Mar 19 11:34:39.245430 containerd[1956]: time="2025-03-19T11:34:39.245203926Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 19 11:34:39.248391 containerd[1956]: time="2025-03-19T11:34:39.247561662Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 19 11:34:39.252696 containerd[1956]: time="2025-03-19T11:34:39.252003906Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 19 11:34:39.279678 containerd[1956]: time="2025-03-19T11:34:39.279623586Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\"" Mar 19 11:34:39.280724 containerd[1956]: time="2025-03-19T11:34:39.280673910Z" level=info msg="StartContainer for \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\"" Mar 19 11:34:39.339684 systemd[1]: Started cri-containerd-1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436.scope - libcontainer container 1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436. Mar 19 11:34:39.387658 containerd[1956]: time="2025-03-19T11:34:39.387259207Z" level=info msg="StartContainer for \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\" returns successfully" Mar 19 11:34:39.409213 systemd[1]: cri-containerd-1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436.scope: Deactivated successfully. 
Mar 19 11:34:39.465105 kubelet[3329]: I0319 11:34:39.464772 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-thggl" podStartSLOduration=11.464748355 podStartE2EDuration="11.464748355s" podCreationTimestamp="2025-03-19 11:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:31.439736603 +0000 UTC m=+8.387757006" watchObservedRunningTime="2025-03-19 11:34:39.464748355 +0000 UTC m=+16.412768746"
Mar 19 11:34:39.472644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436-rootfs.mount: Deactivated successfully.
Mar 19 11:34:40.457093 containerd[1956]: time="2025-03-19T11:34:40.456995168Z" level=info msg="shim disconnected" id=1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436 namespace=k8s.io
Mar 19 11:34:40.458045 containerd[1956]: time="2025-03-19T11:34:40.457799156Z" level=warning msg="cleaning up after shim disconnected" id=1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436 namespace=k8s.io
Mar 19 11:34:40.458045 containerd[1956]: time="2025-03-19T11:34:40.457835996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:34:41.442388 containerd[1956]: time="2025-03-19T11:34:41.441148581Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:34:41.508797 containerd[1956]: time="2025-03-19T11:34:41.508622673Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\""
Mar 19 11:34:41.513479 containerd[1956]: time="2025-03-19T11:34:41.509905653Z" level=info msg="StartContainer for \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\""
Mar 19 11:34:41.571641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754970967.mount: Deactivated successfully.
Mar 19 11:34:41.613688 systemd[1]: Started cri-containerd-c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf.scope - libcontainer container c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf.
Mar 19 11:34:41.707156 containerd[1956]: time="2025-03-19T11:34:41.706948630Z" level=info msg="StartContainer for \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\" returns successfully"
Mar 19 11:34:41.734145 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 11:34:41.734706 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:34:41.736177 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:34:41.747282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:34:41.753728 systemd[1]: cri-containerd-c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf.scope: Deactivated successfully.
Mar 19 11:34:41.800153 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:34:41.842068 containerd[1956]: time="2025-03-19T11:34:41.841993199Z" level=info msg="shim disconnected" id=c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf namespace=k8s.io
Mar 19 11:34:41.842719 containerd[1956]: time="2025-03-19T11:34:41.842682851Z" level=warning msg="cleaning up after shim disconnected" id=c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf namespace=k8s.io
Mar 19 11:34:41.843266 containerd[1956]: time="2025-03-19T11:34:41.843016451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:34:42.323890 containerd[1956]: time="2025-03-19T11:34:42.323834193Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:34:42.325854 containerd[1956]: time="2025-03-19T11:34:42.325789665Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 19 11:34:42.328306 containerd[1956]: time="2025-03-19T11:34:42.328222245Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:34:42.331436 containerd[1956]: time="2025-03-19T11:34:42.331110153Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.082913895s"
Mar 19 11:34:42.331436 containerd[1956]: time="2025-03-19T11:34:42.331165653Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 19 11:34:42.337263 containerd[1956]: time="2025-03-19T11:34:42.337161117Z" level=info msg="CreateContainer within sandbox \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 19 11:34:42.363576 containerd[1956]: time="2025-03-19T11:34:42.363493618Z" level=info msg="CreateContainer within sandbox \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\""
Mar 19 11:34:42.365430 containerd[1956]: time="2025-03-19T11:34:42.365283322Z" level=info msg="StartContainer for \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\""
Mar 19 11:34:42.407944 systemd[1]: Started cri-containerd-d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe.scope - libcontainer container d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe.
Mar 19 11:34:42.456366 containerd[1956]: time="2025-03-19T11:34:42.455524762Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:34:42.478696 containerd[1956]: time="2025-03-19T11:34:42.478626274Z" level=info msg="StartContainer for \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\" returns successfully"
Mar 19 11:34:42.484285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf-rootfs.mount: Deactivated successfully.
Mar 19 11:34:42.506942 containerd[1956]: time="2025-03-19T11:34:42.506857738Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\""
Mar 19 11:34:42.507637 containerd[1956]: time="2025-03-19T11:34:42.507580954Z" level=info msg="StartContainer for \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\""
Mar 19 11:34:42.593029 systemd[1]: Started cri-containerd-f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e.scope - libcontainer container f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e.
Mar 19 11:34:42.663112 containerd[1956]: time="2025-03-19T11:34:42.662817035Z" level=info msg="StartContainer for \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\" returns successfully"
Mar 19 11:34:42.672779 systemd[1]: cri-containerd-f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e.scope: Deactivated successfully.
Mar 19 11:34:42.824913 containerd[1956]: time="2025-03-19T11:34:42.824827092Z" level=info msg="shim disconnected" id=f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e namespace=k8s.io
Mar 19 11:34:42.824913 containerd[1956]: time="2025-03-19T11:34:42.824903148Z" level=warning msg="cleaning up after shim disconnected" id=f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e namespace=k8s.io
Mar 19 11:34:42.824913 containerd[1956]: time="2025-03-19T11:34:42.824923560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:34:43.477630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e-rootfs.mount: Deactivated successfully.
Mar 19 11:34:43.490143 containerd[1956]: time="2025-03-19T11:34:43.489962123Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:34:43.531794 containerd[1956]: time="2025-03-19T11:34:43.531726479Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\""
Mar 19 11:34:43.533231 containerd[1956]: time="2025-03-19T11:34:43.532844567Z" level=info msg="StartContainer for \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\""
Mar 19 11:34:43.686656 systemd[1]: Started cri-containerd-b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997.scope - libcontainer container b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997.
Mar 19 11:34:43.792464 kubelet[3329]: I0319 11:34:43.790163 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xblz2" podStartSLOduration=2.534409069 podStartE2EDuration="14.790141477s" podCreationTimestamp="2025-03-19 11:34:29 +0000 UTC" firstStartedPulling="2025-03-19 11:34:30.077019573 +0000 UTC m=+7.025039976" lastFinishedPulling="2025-03-19 11:34:42.332751993 +0000 UTC m=+19.280772384" observedRunningTime="2025-03-19 11:34:43.557655012 +0000 UTC m=+20.505675427" watchObservedRunningTime="2025-03-19 11:34:43.790141477 +0000 UTC m=+20.738161868"
Mar 19 11:34:43.822706 systemd[1]: cri-containerd-b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997.scope: Deactivated successfully.
Mar 19 11:34:43.829469 containerd[1956]: time="2025-03-19T11:34:43.824554129Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3886f5b_16a9_404c_a86e_8e60ef9ee59b.slice/cri-containerd-b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997.scope/memory.events\": no such file or directory"
Mar 19 11:34:43.838680 containerd[1956]: time="2025-03-19T11:34:43.838606957Z" level=info msg="StartContainer for \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\" returns successfully"
Mar 19 11:34:43.916950 containerd[1956]: time="2025-03-19T11:34:43.916615057Z" level=info msg="shim disconnected" id=b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997 namespace=k8s.io
Mar 19 11:34:43.916950 containerd[1956]: time="2025-03-19T11:34:43.916694161Z" level=warning msg="cleaning up after shim disconnected" id=b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997 namespace=k8s.io
Mar 19 11:34:43.916950 containerd[1956]: time="2025-03-19T11:34:43.916714597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:34:44.474678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997-rootfs.mount: Deactivated successfully.
Mar 19 11:34:44.499110 containerd[1956]: time="2025-03-19T11:34:44.497968680Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:34:44.544244 containerd[1956]: time="2025-03-19T11:34:44.544168884Z" level=info msg="CreateContainer within sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\""
Mar 19 11:34:44.546740 containerd[1956]: time="2025-03-19T11:34:44.546564276Z" level=info msg="StartContainer for \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\""
Mar 19 11:34:44.607683 systemd[1]: Started cri-containerd-e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc.scope - libcontainer container e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc.
Mar 19 11:34:44.673362 containerd[1956]: time="2025-03-19T11:34:44.673176145Z" level=info msg="StartContainer for \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\" returns successfully"
Mar 19 11:34:44.863104 kubelet[3329]: I0319 11:34:44.862956 3329 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 19 11:34:44.941419 systemd[1]: Created slice kubepods-burstable-pod6e571f57_0431_42f3_8ad3_6460ebfba7bb.slice - libcontainer container kubepods-burstable-pod6e571f57_0431_42f3_8ad3_6460ebfba7bb.slice.
Mar 19 11:34:44.958534 systemd[1]: Created slice kubepods-burstable-podd191a18f_75e7_410c_99ee_c82e8aa10378.slice - libcontainer container kubepods-burstable-podd191a18f_75e7_410c_99ee_c82e8aa10378.slice.
Mar 19 11:34:44.976007 kubelet[3329]: I0319 11:34:44.975722 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e571f57-0431-42f3-8ad3-6460ebfba7bb-config-volume\") pod \"coredns-6f6b679f8f-xlb9c\" (UID: \"6e571f57-0431-42f3-8ad3-6460ebfba7bb\") " pod="kube-system/coredns-6f6b679f8f-xlb9c"
Mar 19 11:34:44.976007 kubelet[3329]: I0319 11:34:44.975791 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d191a18f-75e7-410c-99ee-c82e8aa10378-config-volume\") pod \"coredns-6f6b679f8f-84zpd\" (UID: \"d191a18f-75e7-410c-99ee-c82e8aa10378\") " pod="kube-system/coredns-6f6b679f8f-84zpd"
Mar 19 11:34:44.976007 kubelet[3329]: I0319 11:34:44.975847 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkpvx\" (UniqueName: \"kubernetes.io/projected/d191a18f-75e7-410c-99ee-c82e8aa10378-kube-api-access-fkpvx\") pod \"coredns-6f6b679f8f-84zpd\" (UID: \"d191a18f-75e7-410c-99ee-c82e8aa10378\") " pod="kube-system/coredns-6f6b679f8f-84zpd"
Mar 19 11:34:44.976007 kubelet[3329]: I0319 11:34:44.975889 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzxq\" (UniqueName: \"kubernetes.io/projected/6e571f57-0431-42f3-8ad3-6460ebfba7bb-kube-api-access-2jzxq\") pod \"coredns-6f6b679f8f-xlb9c\" (UID: \"6e571f57-0431-42f3-8ad3-6460ebfba7bb\") " pod="kube-system/coredns-6f6b679f8f-xlb9c"
Mar 19 11:34:45.255408 containerd[1956]: time="2025-03-19T11:34:45.255326676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xlb9c,Uid:6e571f57-0431-42f3-8ad3-6460ebfba7bb,Namespace:kube-system,Attempt:0,}"
Mar 19 11:34:45.270173 containerd[1956]: time="2025-03-19T11:34:45.270101604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-84zpd,Uid:d191a18f-75e7-410c-99ee-c82e8aa10378,Namespace:kube-system,Attempt:0,}"
Mar 19 11:34:45.543863 kubelet[3329]: I0319 11:34:45.542997 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ttv96" podStartSLOduration=8.203771823 podStartE2EDuration="17.542971741s" podCreationTimestamp="2025-03-19 11:34:28 +0000 UTC" firstStartedPulling="2025-03-19 11:34:29.908074476 +0000 UTC m=+6.856094855" lastFinishedPulling="2025-03-19 11:34:39.247274382 +0000 UTC m=+16.195294773" observedRunningTime="2025-03-19 11:34:45.541087837 +0000 UTC m=+22.489108240" watchObservedRunningTime="2025-03-19 11:34:45.542971741 +0000 UTC m=+22.490992144"
Mar 19 11:34:47.599606 systemd-networkd[1848]: cilium_host: Link UP
Mar 19 11:34:47.600955 (udev-worker)[4127]: Network interface NamePolicy= disabled on kernel command line.
Mar 19 11:34:47.603407 (udev-worker)[4128]: Network interface NamePolicy= disabled on kernel command line.
Mar 19 11:34:47.604303 systemd-networkd[1848]: cilium_net: Link UP
Mar 19 11:34:47.604328 systemd-networkd[1848]: cilium_net: Gained carrier
Mar 19 11:34:47.604818 systemd-networkd[1848]: cilium_host: Gained carrier
Mar 19 11:34:47.605358 systemd-networkd[1848]: cilium_host: Gained IPv6LL
Mar 19 11:34:47.762582 (udev-worker)[4172]: Network interface NamePolicy= disabled on kernel command line.
Mar 19 11:34:47.775025 systemd-networkd[1848]: cilium_vxlan: Link UP
Mar 19 11:34:47.775040 systemd-networkd[1848]: cilium_vxlan: Gained carrier
Mar 19 11:34:48.253393 kernel: NET: Registered PF_ALG protocol family
Mar 19 11:34:48.493550 systemd-networkd[1848]: cilium_net: Gained IPv6LL
Mar 19 11:34:49.389692 systemd-networkd[1848]: cilium_vxlan: Gained IPv6LL
Mar 19 11:34:49.538229 systemd-networkd[1848]: lxc_health: Link UP
Mar 19 11:34:49.557453 systemd-networkd[1848]: lxc_health: Gained carrier
Mar 19 11:34:49.906388 kernel: eth0: renamed from tmp132aa
Mar 19 11:34:49.906668 systemd-networkd[1848]: lxc43e3560b3c6e: Link UP
Mar 19 11:34:49.914876 (udev-worker)[4173]: Network interface NamePolicy= disabled on kernel command line.
Mar 19 11:34:49.937525 kernel: eth0: renamed from tmp0a725
Mar 19 11:34:49.950442 systemd-networkd[1848]: lxc43e3560b3c6e: Gained carrier
Mar 19 11:34:49.955723 systemd-networkd[1848]: lxcd6824cb7b558: Link UP
Mar 19 11:34:49.959820 systemd-networkd[1848]: lxcd6824cb7b558: Gained carrier
Mar 19 11:34:51.310127 systemd-networkd[1848]: lxc_health: Gained IPv6LL
Mar 19 11:34:51.821745 systemd-networkd[1848]: lxc43e3560b3c6e: Gained IPv6LL
Mar 19 11:34:51.950014 systemd-networkd[1848]: lxcd6824cb7b558: Gained IPv6LL
Mar 19 11:34:54.706566 ntpd[1924]: Listen normally on 8 cilium_host 192.168.0.80:123
Mar 19 11:34:54.707840 ntpd[1924]: 19 Mar 11:34:54 ntpd[1924]: Listen normally on 8 cilium_host 192.168.0.80:123
Mar 19 11:34:54.707840 ntpd[1924]: 19 Mar 11:34:54 ntpd[1924]: Listen normally on 9 cilium_net [fe80::487:28ff:fe42:eae6%4]:123
Mar 19 11:34:54.707840 ntpd[1924]: 19 Mar 11:34:54 ntpd[1924]: Listen normally on 10 cilium_host [fe80::d087:85ff:fe2d:e8dd%5]:123
Mar 19 11:34:54.707840 ntpd[1924]: 19 Mar 11:34:54 ntpd[1924]: Listen normally on 11 cilium_vxlan [fe80::f41b:f4ff:fe0d:bf19%6]:123
Mar 19 11:34:54.707840 ntpd[1924]: 19 Mar 11:34:54 ntpd[1924]: Listen normally on 12 lxc_health [fe80::d0c8:c1ff:fe43:1271%8]:123
Mar 19 11:34:54.707840 ntpd[1924]: 19 Mar 11:34:54 ntpd[1924]: Listen normally on 13 lxc43e3560b3c6e [fe80::488e:54ff:fe8f:eadb%10]:123
Mar 19 11:34:54.707840 ntpd[1924]: 19 Mar 11:34:54 ntpd[1924]: Listen normally on 14 lxcd6824cb7b558 [fe80::d4df:7bff:feb9:5d3b%12]:123
Mar 19 11:34:54.706722 ntpd[1924]: Listen normally on 9 cilium_net [fe80::487:28ff:fe42:eae6%4]:123
Mar 19 11:34:54.706806 ntpd[1924]: Listen normally on 10 cilium_host [fe80::d087:85ff:fe2d:e8dd%5]:123
Mar 19 11:34:54.706872 ntpd[1924]: Listen normally on 11 cilium_vxlan [fe80::f41b:f4ff:fe0d:bf19%6]:123
Mar 19 11:34:54.706938 ntpd[1924]: Listen normally on 12 lxc_health [fe80::d0c8:c1ff:fe43:1271%8]:123
Mar 19 11:34:54.707004 ntpd[1924]: Listen normally on 13 lxc43e3560b3c6e [fe80::488e:54ff:fe8f:eadb%10]:123
Mar 19 11:34:54.707075 ntpd[1924]: Listen normally on 14 lxcd6824cb7b558 [fe80::d4df:7bff:feb9:5d3b%12]:123
Mar 19 11:34:58.133289 containerd[1956]: time="2025-03-19T11:34:58.133059552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:34:58.134284 containerd[1956]: time="2025-03-19T11:34:58.133192116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:34:58.134284 containerd[1956]: time="2025-03-19T11:34:58.134167704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:58.137284 containerd[1956]: time="2025-03-19T11:34:58.137163528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:58.211684 systemd[1]: Started cri-containerd-0a725f38d41c3d285db78ef48db5b119c12d7de1a240e66caddf5bfa2703280e.scope - libcontainer container 0a725f38d41c3d285db78ef48db5b119c12d7de1a240e66caddf5bfa2703280e.
Mar 19 11:34:58.226823 containerd[1956]: time="2025-03-19T11:34:58.225884844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:34:58.226823 containerd[1956]: time="2025-03-19T11:34:58.226001184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:34:58.226823 containerd[1956]: time="2025-03-19T11:34:58.226037496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:58.226823 containerd[1956]: time="2025-03-19T11:34:58.226201200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:58.289673 systemd[1]: Started cri-containerd-132aa4b0ff0b5aad727db7a7ceea239a032b188bf1e311505d2fd36e366a18b4.scope - libcontainer container 132aa4b0ff0b5aad727db7a7ceea239a032b188bf1e311505d2fd36e366a18b4.
Mar 19 11:34:58.355456 containerd[1956]: time="2025-03-19T11:34:58.355297645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-84zpd,Uid:d191a18f-75e7-410c-99ee-c82e8aa10378,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a725f38d41c3d285db78ef48db5b119c12d7de1a240e66caddf5bfa2703280e\""
Mar 19 11:34:58.365941 containerd[1956]: time="2025-03-19T11:34:58.365875309Z" level=info msg="CreateContainer within sandbox \"0a725f38d41c3d285db78ef48db5b119c12d7de1a240e66caddf5bfa2703280e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:34:58.410914 containerd[1956]: time="2025-03-19T11:34:58.410714461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xlb9c,Uid:6e571f57-0431-42f3-8ad3-6460ebfba7bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"132aa4b0ff0b5aad727db7a7ceea239a032b188bf1e311505d2fd36e366a18b4\""
Mar 19 11:34:58.411444 containerd[1956]: time="2025-03-19T11:34:58.411319609Z" level=info msg="CreateContainer within sandbox \"0a725f38d41c3d285db78ef48db5b119c12d7de1a240e66caddf5bfa2703280e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e36cad2d0134958141776f66f15683637a2738cb82a1c5f9d33927573c781ae9\""
Mar 19 11:34:58.414264 containerd[1956]: time="2025-03-19T11:34:58.414191689Z" level=info msg="StartContainer for \"e36cad2d0134958141776f66f15683637a2738cb82a1c5f9d33927573c781ae9\""
Mar 19 11:34:58.426245 containerd[1956]: time="2025-03-19T11:34:58.424460257Z" level=info msg="CreateContainer within sandbox \"132aa4b0ff0b5aad727db7a7ceea239a032b188bf1e311505d2fd36e366a18b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 19 11:34:58.473487 containerd[1956]: time="2025-03-19T11:34:58.473151926Z" level=info msg="CreateContainer within sandbox \"132aa4b0ff0b5aad727db7a7ceea239a032b188bf1e311505d2fd36e366a18b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f05a9ea39ea152c9b5a04f87bfd23fb35989cbb2c88311149f0acb441145c4bf\""
Mar 19 11:34:58.477202 containerd[1956]: time="2025-03-19T11:34:58.477066386Z" level=info msg="StartContainer for \"f05a9ea39ea152c9b5a04f87bfd23fb35989cbb2c88311149f0acb441145c4bf\""
Mar 19 11:34:58.527590 systemd[1]: Started cri-containerd-e36cad2d0134958141776f66f15683637a2738cb82a1c5f9d33927573c781ae9.scope - libcontainer container e36cad2d0134958141776f66f15683637a2738cb82a1c5f9d33927573c781ae9.
Mar 19 11:34:58.625046 systemd[1]: Started cri-containerd-f05a9ea39ea152c9b5a04f87bfd23fb35989cbb2c88311149f0acb441145c4bf.scope - libcontainer container f05a9ea39ea152c9b5a04f87bfd23fb35989cbb2c88311149f0acb441145c4bf.
Mar 19 11:34:58.680373 containerd[1956]: time="2025-03-19T11:34:58.679650555Z" level=info msg="StartContainer for \"e36cad2d0134958141776f66f15683637a2738cb82a1c5f9d33927573c781ae9\" returns successfully"
Mar 19 11:34:58.714142 containerd[1956]: time="2025-03-19T11:34:58.714063267Z" level=info msg="StartContainer for \"f05a9ea39ea152c9b5a04f87bfd23fb35989cbb2c88311149f0acb441145c4bf\" returns successfully"
Mar 19 11:34:59.615600 kubelet[3329]: I0319 11:34:59.614855 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xlb9c" podStartSLOduration=30.614830287 podStartE2EDuration="30.614830287s" podCreationTimestamp="2025-03-19 11:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:59.590106723 +0000 UTC m=+36.538127138" watchObservedRunningTime="2025-03-19 11:34:59.614830287 +0000 UTC m=+36.562850678"
Mar 19 11:34:59.637890 kubelet[3329]: I0319 11:34:59.637795 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-84zpd" podStartSLOduration=30.637769511 podStartE2EDuration="30.637769511s" podCreationTimestamp="2025-03-19 11:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:59.634668003 +0000 UTC m=+36.582688430" watchObservedRunningTime="2025-03-19 11:34:59.637769511 +0000 UTC m=+36.585789902"
Mar 19 11:35:01.701898 systemd[1]: Started sshd@9-172.31.31.152:22-139.178.68.195:57390.service - OpenSSH per-connection server daemon (139.178.68.195:57390).
Mar 19 11:35:01.894050 sshd[4703]: Accepted publickey for core from 139.178.68.195 port 57390 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:01.897511 sshd-session[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:01.910091 systemd-logind[1931]: New session 10 of user core.
Mar 19 11:35:01.919632 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 19 11:35:02.193865 sshd[4705]: Connection closed by 139.178.68.195 port 57390
Mar 19 11:35:02.194921 sshd-session[4703]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:02.201738 systemd[1]: sshd@9-172.31.31.152:22-139.178.68.195:57390.service: Deactivated successfully.
Mar 19 11:35:02.205932 systemd[1]: session-10.scope: Deactivated successfully.
Mar 19 11:35:02.207655 systemd-logind[1931]: Session 10 logged out. Waiting for processes to exit.
Mar 19 11:35:02.209488 systemd-logind[1931]: Removed session 10.
Mar 19 11:35:07.240800 systemd[1]: Started sshd@10-172.31.31.152:22-139.178.68.195:48482.service - OpenSSH per-connection server daemon (139.178.68.195:48482).
Mar 19 11:35:07.423107 sshd[4719]: Accepted publickey for core from 139.178.68.195 port 48482 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:07.426128 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:07.434984 systemd-logind[1931]: New session 11 of user core.
Mar 19 11:35:07.440607 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 19 11:35:07.686869 sshd[4721]: Connection closed by 139.178.68.195 port 48482
Mar 19 11:35:07.687757 sshd-session[4719]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:07.694522 systemd-logind[1931]: Session 11 logged out. Waiting for processes to exit.
Mar 19 11:35:07.694626 systemd[1]: sshd@10-172.31.31.152:22-139.178.68.195:48482.service: Deactivated successfully.
Mar 19 11:35:07.698288 systemd[1]: session-11.scope: Deactivated successfully.
Mar 19 11:35:07.701296 systemd-logind[1931]: Removed session 11.
Mar 19 11:35:12.731799 systemd[1]: Started sshd@11-172.31.31.152:22-139.178.68.195:48488.service - OpenSSH per-connection server daemon (139.178.68.195:48488).
Mar 19 11:35:12.909278 sshd[4734]: Accepted publickey for core from 139.178.68.195 port 48488 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:12.912185 sshd-session[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:12.920787 systemd-logind[1931]: New session 12 of user core.
Mar 19 11:35:12.930592 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 19 11:35:13.172127 sshd[4736]: Connection closed by 139.178.68.195 port 48488
Mar 19 11:35:13.173242 sshd-session[4734]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:13.178262 systemd-logind[1931]: Session 12 logged out. Waiting for processes to exit.
Mar 19 11:35:13.181105 systemd[1]: sshd@11-172.31.31.152:22-139.178.68.195:48488.service: Deactivated successfully.
Mar 19 11:35:13.185134 systemd[1]: session-12.scope: Deactivated successfully.
Mar 19 11:35:13.188981 systemd-logind[1931]: Removed session 12.
Mar 19 11:35:18.216902 systemd[1]: Started sshd@12-172.31.31.152:22-139.178.68.195:60540.service - OpenSSH per-connection server daemon (139.178.68.195:60540).
Mar 19 11:35:18.405051 sshd[4750]: Accepted publickey for core from 139.178.68.195 port 60540 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:18.407536 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:18.416133 systemd-logind[1931]: New session 13 of user core.
Mar 19 11:35:18.428620 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 19 11:35:18.671891 sshd[4752]: Connection closed by 139.178.68.195 port 60540
Mar 19 11:35:18.672450 sshd-session[4750]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:18.680129 systemd[1]: sshd@12-172.31.31.152:22-139.178.68.195:60540.service: Deactivated successfully.
Mar 19 11:35:18.686627 systemd[1]: session-13.scope: Deactivated successfully.
Mar 19 11:35:18.688176 systemd-logind[1931]: Session 13 logged out. Waiting for processes to exit.
Mar 19 11:35:18.689982 systemd-logind[1931]: Removed session 13.
Mar 19 11:35:18.718862 systemd[1]: Started sshd@13-172.31.31.152:22-139.178.68.195:60552.service - OpenSSH per-connection server daemon (139.178.68.195:60552).
Mar 19 11:35:18.902870 sshd[4765]: Accepted publickey for core from 139.178.68.195 port 60552 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:18.905559 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:18.913494 systemd-logind[1931]: New session 14 of user core.
Mar 19 11:35:18.920616 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 19 11:35:19.236878 sshd[4767]: Connection closed by 139.178.68.195 port 60552
Mar 19 11:35:19.235287 sshd-session[4765]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:19.243763 systemd[1]: sshd@13-172.31.31.152:22-139.178.68.195:60552.service: Deactivated successfully.
Mar 19 11:35:19.249301 systemd[1]: session-14.scope: Deactivated successfully.
Mar 19 11:35:19.259829 systemd-logind[1931]: Session 14 logged out. Waiting for processes to exit.
Mar 19 11:35:19.298377 systemd[1]: Started sshd@14-172.31.31.152:22-139.178.68.195:60568.service - OpenSSH per-connection server daemon (139.178.68.195:60568).
Mar 19 11:35:19.302645 systemd-logind[1931]: Removed session 14.
Mar 19 11:35:19.492584 sshd[4776]: Accepted publickey for core from 139.178.68.195 port 60568 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:19.494818 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:19.503567 systemd-logind[1931]: New session 15 of user core.
Mar 19 11:35:19.511621 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 19 11:35:19.751433 sshd[4779]: Connection closed by 139.178.68.195 port 60568
Mar 19 11:35:19.752722 sshd-session[4776]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:19.759059 systemd[1]: sshd@14-172.31.31.152:22-139.178.68.195:60568.service: Deactivated successfully.
Mar 19 11:35:19.763940 systemd[1]: session-15.scope: Deactivated successfully.
Mar 19 11:35:19.766473 systemd-logind[1931]: Session 15 logged out. Waiting for processes to exit.
Mar 19 11:35:19.768394 systemd-logind[1931]: Removed session 15.
Mar 19 11:35:24.796882 systemd[1]: Started sshd@15-172.31.31.152:22-139.178.68.195:60574.service - OpenSSH per-connection server daemon (139.178.68.195:60574).
Mar 19 11:35:24.983311 sshd[4794]: Accepted publickey for core from 139.178.68.195 port 60574 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:24.985791 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:24.994532 systemd-logind[1931]: New session 16 of user core.
Mar 19 11:35:25.003588 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 19 11:35:25.245984 sshd[4796]: Connection closed by 139.178.68.195 port 60574
Mar 19 11:35:25.247008 sshd-session[4794]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:25.253485 systemd[1]: sshd@15-172.31.31.152:22-139.178.68.195:60574.service: Deactivated successfully.
Mar 19 11:35:25.259999 systemd[1]: session-16.scope: Deactivated successfully.
Mar 19 11:35:25.262652 systemd-logind[1931]: Session 16 logged out. Waiting for processes to exit.
Mar 19 11:35:25.264507 systemd-logind[1931]: Removed session 16.
Mar 19 11:35:30.293876 systemd[1]: Started sshd@16-172.31.31.152:22-139.178.68.195:46478.service - OpenSSH per-connection server daemon (139.178.68.195:46478).
Mar 19 11:35:30.476397 sshd[4810]: Accepted publickey for core from 139.178.68.195 port 46478 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:30.478809 sshd-session[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:30.487626 systemd-logind[1931]: New session 17 of user core.
Mar 19 11:35:30.494586 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 19 11:35:30.746196 sshd[4812]: Connection closed by 139.178.68.195 port 46478
Mar 19 11:35:30.745985 sshd-session[4810]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:30.751910 systemd[1]: sshd@16-172.31.31.152:22-139.178.68.195:46478.service: Deactivated successfully.
Mar 19 11:35:30.756284 systemd[1]: session-17.scope: Deactivated successfully.
Mar 19 11:35:30.760064 systemd-logind[1931]: Session 17 logged out. Waiting for processes to exit.
Mar 19 11:35:30.762637 systemd-logind[1931]: Removed session 17.
Mar 19 11:35:35.791515 systemd[1]: Started sshd@17-172.31.31.152:22-139.178.68.195:50886.service - OpenSSH per-connection server daemon (139.178.68.195:50886).
Mar 19 11:35:35.982653 sshd[4827]: Accepted publickey for core from 139.178.68.195 port 50886 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:35.985064 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:35.993713 systemd-logind[1931]: New session 18 of user core.
Mar 19 11:35:35.999640 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 19 11:35:36.247577 sshd[4829]: Connection closed by 139.178.68.195 port 50886
Mar 19 11:35:36.248687 sshd-session[4827]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:36.254896 systemd[1]: sshd@17-172.31.31.152:22-139.178.68.195:50886.service: Deactivated successfully.
Mar 19 11:35:36.260287 systemd[1]: session-18.scope: Deactivated successfully.
Mar 19 11:35:36.263040 systemd-logind[1931]: Session 18 logged out. Waiting for processes to exit.
Mar 19 11:35:36.265203 systemd-logind[1931]: Removed session 18.
Mar 19 11:35:41.292980 systemd[1]: Started sshd@18-172.31.31.152:22-139.178.68.195:50894.service - OpenSSH per-connection server daemon (139.178.68.195:50894).
Mar 19 11:35:41.476059 sshd[4841]: Accepted publickey for core from 139.178.68.195 port 50894 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:41.478786 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:41.488939 systemd-logind[1931]: New session 19 of user core.
Mar 19 11:35:41.496738 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 19 11:35:41.740112 sshd[4843]: Connection closed by 139.178.68.195 port 50894
Mar 19 11:35:41.741418 sshd-session[4841]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:41.746637 systemd[1]: sshd@18-172.31.31.152:22-139.178.68.195:50894.service: Deactivated successfully.
Mar 19 11:35:41.750894 systemd[1]: session-19.scope: Deactivated successfully.
Mar 19 11:35:41.755007 systemd-logind[1931]: Session 19 logged out. Waiting for processes to exit.
Mar 19 11:35:41.757566 systemd-logind[1931]: Removed session 19.
Mar 19 11:35:41.782854 systemd[1]: Started sshd@19-172.31.31.152:22-139.178.68.195:50896.service - OpenSSH per-connection server daemon (139.178.68.195:50896).
Mar 19 11:35:41.975426 sshd[4855]: Accepted publickey for core from 139.178.68.195 port 50896 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:41.977853 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:41.986445 systemd-logind[1931]: New session 20 of user core.
Mar 19 11:35:41.997619 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 19 11:35:42.299966 sshd[4857]: Connection closed by 139.178.68.195 port 50896
Mar 19 11:35:42.300486 sshd-session[4855]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:42.307791 systemd[1]: sshd@19-172.31.31.152:22-139.178.68.195:50896.service: Deactivated successfully.
Mar 19 11:35:42.312763 systemd[1]: session-20.scope: Deactivated successfully.
Mar 19 11:35:42.314423 systemd-logind[1931]: Session 20 logged out. Waiting for processes to exit.
Mar 19 11:35:42.316208 systemd-logind[1931]: Removed session 20.
Mar 19 11:35:42.341962 systemd[1]: Started sshd@20-172.31.31.152:22-139.178.68.195:50902.service - OpenSSH per-connection server daemon (139.178.68.195:50902).
Mar 19 11:35:42.518291 sshd[4867]: Accepted publickey for core from 139.178.68.195 port 50902 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:42.520843 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:42.528677 systemd-logind[1931]: New session 21 of user core.
Mar 19 11:35:42.537795 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 19 11:35:45.179569 sshd[4869]: Connection closed by 139.178.68.195 port 50902
Mar 19 11:35:45.181615 sshd-session[4867]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:45.190648 systemd[1]: sshd@20-172.31.31.152:22-139.178.68.195:50902.service: Deactivated successfully.
Mar 19 11:35:45.199222 systemd[1]: session-21.scope: Deactivated successfully.
Mar 19 11:35:45.202533 systemd-logind[1931]: Session 21 logged out. Waiting for processes to exit.
Mar 19 11:35:45.233819 systemd[1]: Started sshd@21-172.31.31.152:22-139.178.68.195:50910.service - OpenSSH per-connection server daemon (139.178.68.195:50910).
Mar 19 11:35:45.237087 systemd-logind[1931]: Removed session 21.
Mar 19 11:35:45.415901 sshd[4885]: Accepted publickey for core from 139.178.68.195 port 50910 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:45.419001 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:45.430087 systemd-logind[1931]: New session 22 of user core.
Mar 19 11:35:45.436697 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 19 11:35:45.928995 sshd[4888]: Connection closed by 139.178.68.195 port 50910
Mar 19 11:35:45.929406 sshd-session[4885]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:45.936683 systemd-logind[1931]: Session 22 logged out. Waiting for processes to exit.
Mar 19 11:35:45.937571 systemd[1]: sshd@21-172.31.31.152:22-139.178.68.195:50910.service: Deactivated successfully.
Mar 19 11:35:45.945152 systemd[1]: session-22.scope: Deactivated successfully.
Mar 19 11:35:45.948926 systemd-logind[1931]: Removed session 22.
Mar 19 11:35:45.969884 systemd[1]: Started sshd@22-172.31.31.152:22-139.178.68.195:53936.service - OpenSSH per-connection server daemon (139.178.68.195:53936).
Mar 19 11:35:46.158218 sshd[4898]: Accepted publickey for core from 139.178.68.195 port 53936 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:46.161173 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:46.170473 systemd-logind[1931]: New session 23 of user core.
Mar 19 11:35:46.177610 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 19 11:35:46.416264 sshd[4900]: Connection closed by 139.178.68.195 port 53936
Mar 19 11:35:46.417268 sshd-session[4898]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:46.423913 systemd[1]: sshd@22-172.31.31.152:22-139.178.68.195:53936.service: Deactivated successfully.
Mar 19 11:35:46.429052 systemd[1]: session-23.scope: Deactivated successfully.
Mar 19 11:35:46.432798 systemd-logind[1931]: Session 23 logged out. Waiting for processes to exit.
Mar 19 11:35:46.434619 systemd-logind[1931]: Removed session 23.
Mar 19 11:35:51.462871 systemd[1]: Started sshd@23-172.31.31.152:22-139.178.68.195:53938.service - OpenSSH per-connection server daemon (139.178.68.195:53938).
Mar 19 11:35:51.654139 sshd[4912]: Accepted publickey for core from 139.178.68.195 port 53938 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:51.656932 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:51.666375 systemd-logind[1931]: New session 24 of user core.
Mar 19 11:35:51.673593 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 19 11:35:51.914487 sshd[4914]: Connection closed by 139.178.68.195 port 53938
Mar 19 11:35:51.915319 sshd-session[4912]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:51.920578 systemd[1]: sshd@23-172.31.31.152:22-139.178.68.195:53938.service: Deactivated successfully.
Mar 19 11:35:51.925089 systemd[1]: session-24.scope: Deactivated successfully.
Mar 19 11:35:51.929077 systemd-logind[1931]: Session 24 logged out. Waiting for processes to exit.
Mar 19 11:35:51.930981 systemd-logind[1931]: Removed session 24.
Mar 19 11:35:56.962802 systemd[1]: Started sshd@24-172.31.31.152:22-139.178.68.195:59636.service - OpenSSH per-connection server daemon (139.178.68.195:59636).
Mar 19 11:35:57.146091 sshd[4929]: Accepted publickey for core from 139.178.68.195 port 59636 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:57.148851 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:57.157651 systemd-logind[1931]: New session 25 of user core.
Mar 19 11:35:57.165687 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 19 11:35:57.404400 sshd[4931]: Connection closed by 139.178.68.195 port 59636
Mar 19 11:35:57.405237 sshd-session[4929]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:57.411483 systemd[1]: sshd@24-172.31.31.152:22-139.178.68.195:59636.service: Deactivated successfully.
Mar 19 11:35:57.411875 systemd-logind[1931]: Session 25 logged out. Waiting for processes to exit.
Mar 19 11:35:57.415975 systemd[1]: session-25.scope: Deactivated successfully.
Mar 19 11:35:57.421841 systemd-logind[1931]: Removed session 25.
Mar 19 11:36:02.453810 systemd[1]: Started sshd@25-172.31.31.152:22-139.178.68.195:59646.service - OpenSSH per-connection server daemon (139.178.68.195:59646).
Mar 19 11:36:02.631011 sshd[4945]: Accepted publickey for core from 139.178.68.195 port 59646 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:02.633467 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:02.642248 systemd-logind[1931]: New session 26 of user core.
Mar 19 11:36:02.647609 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 19 11:36:02.889267 sshd[4947]: Connection closed by 139.178.68.195 port 59646
Mar 19 11:36:02.890161 sshd-session[4945]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:02.895313 systemd-logind[1931]: Session 26 logged out. Waiting for processes to exit.
Mar 19 11:36:02.896135 systemd[1]: sshd@25-172.31.31.152:22-139.178.68.195:59646.service: Deactivated successfully.
Mar 19 11:36:02.901452 systemd[1]: session-26.scope: Deactivated successfully.
Mar 19 11:36:02.905983 systemd-logind[1931]: Removed session 26.
Mar 19 11:36:07.934828 systemd[1]: Started sshd@26-172.31.31.152:22-139.178.68.195:59688.service - OpenSSH per-connection server daemon (139.178.68.195:59688).
Mar 19 11:36:08.118916 sshd[4959]: Accepted publickey for core from 139.178.68.195 port 59688 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:08.121379 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:08.130474 systemd-logind[1931]: New session 27 of user core.
Mar 19 11:36:08.134909 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 19 11:36:08.374524 sshd[4961]: Connection closed by 139.178.68.195 port 59688
Mar 19 11:36:08.375524 sshd-session[4959]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:08.380405 systemd-logind[1931]: Session 27 logged out. Waiting for processes to exit.
Mar 19 11:36:08.381873 systemd[1]: sshd@26-172.31.31.152:22-139.178.68.195:59688.service: Deactivated successfully.
Mar 19 11:36:08.385745 systemd[1]: session-27.scope: Deactivated successfully.
Mar 19 11:36:08.389988 systemd-logind[1931]: Removed session 27.
Mar 19 11:36:08.414882 systemd[1]: Started sshd@27-172.31.31.152:22-139.178.68.195:59690.service - OpenSSH per-connection server daemon (139.178.68.195:59690).
Mar 19 11:36:08.608000 sshd[4973]: Accepted publickey for core from 139.178.68.195 port 59690 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:08.610175 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:08.620693 systemd-logind[1931]: New session 28 of user core.
Mar 19 11:36:08.630608 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 19 11:36:10.502308 containerd[1956]: time="2025-03-19T11:36:10.501718295Z" level=info msg="StopContainer for \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\" with timeout 30 (s)"
Mar 19 11:36:10.507374 containerd[1956]: time="2025-03-19T11:36:10.505481063Z" level=info msg="Stop container \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\" with signal terminated"
Mar 19 11:36:10.534441 systemd[1]: cri-containerd-d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe.scope: Deactivated successfully.
Mar 19 11:36:10.580440 containerd[1956]: time="2025-03-19T11:36:10.580367460Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:36:10.598488 containerd[1956]: time="2025-03-19T11:36:10.598415172Z" level=info msg="StopContainer for \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\" with timeout 2 (s)"
Mar 19 11:36:10.599363 containerd[1956]: time="2025-03-19T11:36:10.599295144Z" level=info msg="Stop container \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\" with signal terminated"
Mar 19 11:36:10.603836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe-rootfs.mount: Deactivated successfully.
Mar 19 11:36:10.622781 systemd-networkd[1848]: lxc_health: Link DOWN
Mar 19 11:36:10.622795 systemd-networkd[1848]: lxc_health: Lost carrier
Mar 19 11:36:10.633090 containerd[1956]: time="2025-03-19T11:36:10.632986056Z" level=info msg="shim disconnected" id=d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe namespace=k8s.io
Mar 19 11:36:10.633090 containerd[1956]: time="2025-03-19T11:36:10.633067596Z" level=warning msg="cleaning up after shim disconnected" id=d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe namespace=k8s.io
Mar 19 11:36:10.633090 containerd[1956]: time="2025-03-19T11:36:10.633088260Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:10.655006 systemd[1]: cri-containerd-e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc.scope: Deactivated successfully.
Mar 19 11:36:10.655603 systemd[1]: cri-containerd-e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc.scope: Consumed 14.103s CPU time, 125.4M memory peak, 136K read from disk, 12.9M written to disk.
Mar 19 11:36:10.676483 containerd[1956]: time="2025-03-19T11:36:10.676266084Z" level=info msg="StopContainer for \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\" returns successfully"
Mar 19 11:36:10.677922 containerd[1956]: time="2025-03-19T11:36:10.677183400Z" level=info msg="StopPodSandbox for \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\""
Mar 19 11:36:10.677922 containerd[1956]: time="2025-03-19T11:36:10.677301300Z" level=info msg="Container to stop \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.682056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541-shm.mount: Deactivated successfully.
Mar 19 11:36:10.698146 systemd[1]: cri-containerd-01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541.scope: Deactivated successfully.
Mar 19 11:36:10.718829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc-rootfs.mount: Deactivated successfully.
Mar 19 11:36:10.730303 containerd[1956]: time="2025-03-19T11:36:10.730226569Z" level=info msg="shim disconnected" id=e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc namespace=k8s.io
Mar 19 11:36:10.730947 containerd[1956]: time="2025-03-19T11:36:10.730900537Z" level=warning msg="cleaning up after shim disconnected" id=e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc namespace=k8s.io
Mar 19 11:36:10.731175 containerd[1956]: time="2025-03-19T11:36:10.731097601Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:10.763607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541-rootfs.mount: Deactivated successfully.
Mar 19 11:36:10.767540 containerd[1956]: time="2025-03-19T11:36:10.767159953Z" level=info msg="shim disconnected" id=01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541 namespace=k8s.io
Mar 19 11:36:10.767540 containerd[1956]: time="2025-03-19T11:36:10.767235145Z" level=warning msg="cleaning up after shim disconnected" id=01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541 namespace=k8s.io
Mar 19 11:36:10.767540 containerd[1956]: time="2025-03-19T11:36:10.767255485Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:10.775539 containerd[1956]: time="2025-03-19T11:36:10.775467781Z" level=info msg="StopContainer for \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\" returns successfully"
Mar 19 11:36:10.776846 containerd[1956]: time="2025-03-19T11:36:10.776527765Z" level=info msg="StopPodSandbox for \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\""
Mar 19 11:36:10.776846 containerd[1956]: time="2025-03-19T11:36:10.776586601Z" level=info msg="Container to stop \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.776846 containerd[1956]: time="2025-03-19T11:36:10.776614993Z" level=info msg="Container to stop \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.776846 containerd[1956]: time="2025-03-19T11:36:10.776638249Z" level=info msg="Container to stop \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.776846 containerd[1956]: time="2025-03-19T11:36:10.776658985Z" level=info msg="Container to stop \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.776846 containerd[1956]: time="2025-03-19T11:36:10.776678965Z" level=info msg="Container to stop \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.792036 systemd[1]: cri-containerd-54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328.scope: Deactivated successfully.
Mar 19 11:36:10.814046 containerd[1956]: time="2025-03-19T11:36:10.813822721Z" level=info msg="TearDown network for sandbox \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" successfully"
Mar 19 11:36:10.814046 containerd[1956]: time="2025-03-19T11:36:10.813874921Z" level=info msg="StopPodSandbox for \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" returns successfully"
Mar 19 11:36:10.858539 containerd[1956]: time="2025-03-19T11:36:10.858385597Z" level=info msg="shim disconnected" id=54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328 namespace=k8s.io
Mar 19 11:36:10.858539 containerd[1956]: time="2025-03-19T11:36:10.858464377Z" level=warning msg="cleaning up after shim disconnected" id=54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328 namespace=k8s.io
Mar 19 11:36:10.858539 containerd[1956]: time="2025-03-19T11:36:10.858484957Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:10.882183 containerd[1956]: time="2025-03-19T11:36:10.882041245Z" level=info msg="TearDown network for sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" successfully"
Mar 19 11:36:10.882183 containerd[1956]: time="2025-03-19T11:36:10.882087709Z" level=info msg="StopPodSandbox for \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" returns successfully"
Mar 19 11:36:10.934503 kubelet[3329]: I0319 11:36:10.934434 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-clustermesh-secrets\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935160 kubelet[3329]: I0319 11:36:10.934512 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-config-path\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935160 kubelet[3329]: I0319 11:36:10.934552 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cni-path\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935160 kubelet[3329]: I0319 11:36:10.934586 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-etc-cni-netd\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935160 kubelet[3329]: I0319 11:36:10.934621 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-kernel\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935160 kubelet[3329]: I0319 11:36:10.934658 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hubble-tls\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935160 kubelet[3329]: I0319 11:36:10.934691 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-net\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935573 kubelet[3329]: I0319 11:36:10.934723 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-run\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935573 kubelet[3329]: I0319 11:36:10.934754 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-lib-modules\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935573 kubelet[3329]: I0319 11:36:10.934785 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-bpf-maps\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935573 kubelet[3329]: I0319 11:36:10.934816 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-xtables-lock\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.935573 kubelet[3329]: I0319 11:36:10.934855 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-cilium-config-path\") pod \"40f2813f-e7e7-4b9f-9738-d3d8fa99388a\" (UID: \"40f2813f-e7e7-4b9f-9738-d3d8fa99388a\") "
Mar 19 11:36:10.935573 kubelet[3329]: I0319 11:36:10.934891 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2v4l\" (UniqueName: \"kubernetes.io/projected/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-kube-api-access-f2v4l\") pod \"40f2813f-e7e7-4b9f-9738-d3d8fa99388a\" (UID: \"40f2813f-e7e7-4b9f-9738-d3d8fa99388a\") "
Mar 19 11:36:10.937515 kubelet[3329]: I0319 11:36:10.934924 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-cgroup\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.937515 kubelet[3329]: I0319 11:36:10.934956 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hostproc\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.937515 kubelet[3329]: I0319 11:36:10.934995 3329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn784\" (UniqueName: \"kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-kube-api-access-jn784\") pod \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\" (UID: \"d3886f5b-16a9-404c-a86e-8e60ef9ee59b\") "
Mar 19 11:36:10.937515 kubelet[3329]: I0319 11:36:10.935432 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.937515 kubelet[3329]: I0319 11:36:10.935857 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.937794 kubelet[3329]: I0319 11:36:10.935906 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.937794 kubelet[3329]: I0319 11:36:10.935946 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.938782 kubelet[3329]: I0319 11:36:10.938711 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.942552 kubelet[3329]: I0319 11:36:10.939040 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hostproc" (OuterVolumeSpecName: "hostproc") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.942722 kubelet[3329]: I0319 11:36:10.939065 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cni-path" (OuterVolumeSpecName: "cni-path") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.942848 kubelet[3329]: I0319 11:36:10.939093 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.942955 kubelet[3329]: I0319 11:36:10.939117 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.943073 kubelet[3329]: I0319 11:36:10.939497 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 11:36:10.951295 kubelet[3329]: I0319 11:36:10.951237 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-kube-api-access-f2v4l" (OuterVolumeSpecName: "kube-api-access-f2v4l") pod "40f2813f-e7e7-4b9f-9738-d3d8fa99388a" (UID: "40f2813f-e7e7-4b9f-9738-d3d8fa99388a"). InnerVolumeSpecName "kube-api-access-f2v4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 11:36:10.951741 kubelet[3329]: I0319 11:36:10.951669 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "40f2813f-e7e7-4b9f-9738-d3d8fa99388a" (UID: "40f2813f-e7e7-4b9f-9738-d3d8fa99388a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 11:36:10.951984 kubelet[3329]: I0319 11:36:10.951955 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 11:36:10.952544 kubelet[3329]: I0319 11:36:10.952489 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 11:36:10.953056 kubelet[3329]: I0319 11:36:10.953016 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-kube-api-access-jn784" (OuterVolumeSpecName: "kube-api-access-jn784") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "kube-api-access-jn784". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 11:36:10.953999 kubelet[3329]: I0319 11:36:10.953938 3329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3886f5b-16a9-404c-a86e-8e60ef9ee59b" (UID: "d3886f5b-16a9-404c-a86e-8e60ef9ee59b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 11:36:11.035881 kubelet[3329]: I0319 11:36:11.035728 3329 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hostproc\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.035881 kubelet[3329]: I0319 11:36:11.035779 3329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jn784\" (UniqueName: \"kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-kube-api-access-jn784\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.035881 kubelet[3329]: I0319 11:36:11.035813 3329 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-cilium-config-path\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.035881 kubelet[3329]: I0319 11:36:11.035835 3329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f2v4l\" (UniqueName: \"kubernetes.io/projected/40f2813f-e7e7-4b9f-9738-d3d8fa99388a-kube-api-access-f2v4l\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.035881 kubelet[3329]: I0319 11:36:11.035856 3329 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-cgroup\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.035881 kubelet[3329]: I0319 11:36:11.035878 3329 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-clustermesh-secrets\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.035897 3329 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-config-path\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.035916 3329 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cni-path\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.035936 3329 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-etc-cni-netd\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.035957 3329 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-kernel\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.035977 3329 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-hubble-tls\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.035999 3329 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-lib-modules\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.036020 3329 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-host-proc-sys-net\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036302 kubelet[3329]: I0319 11:36:11.036039 3329 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-cilium-run\") on node \"ip-172-31-31-152\" DevicePath \"\""
Mar 19 11:36:11.036745
kubelet[3329]: I0319 11:36:11.036058 3329 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-bpf-maps\") on node \"ip-172-31-31-152\" DevicePath \"\"" Mar 19 11:36:11.036745 kubelet[3329]: I0319 11:36:11.036079 3329 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3886f5b-16a9-404c-a86e-8e60ef9ee59b-xtables-lock\") on node \"ip-172-31-31-152\" DevicePath \"\"" Mar 19 11:36:11.300005 systemd[1]: Removed slice kubepods-burstable-podd3886f5b_16a9_404c_a86e_8e60ef9ee59b.slice - libcontainer container kubepods-burstable-podd3886f5b_16a9_404c_a86e_8e60ef9ee59b.slice. Mar 19 11:36:11.300243 systemd[1]: kubepods-burstable-podd3886f5b_16a9_404c_a86e_8e60ef9ee59b.slice: Consumed 14.262s CPU time, 125.8M memory peak, 136K read from disk, 12.9M written to disk. Mar 19 11:36:11.303600 systemd[1]: Removed slice kubepods-besteffort-pod40f2813f_e7e7_4b9f_9738_d3d8fa99388a.slice - libcontainer container kubepods-besteffort-pod40f2813f_e7e7_4b9f_9738_d3d8fa99388a.slice. Mar 19 11:36:11.529716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328-rootfs.mount: Deactivated successfully. Mar 19 11:36:11.529928 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328-shm.mount: Deactivated successfully. Mar 19 11:36:11.530147 systemd[1]: var-lib-kubelet-pods-40f2813f\x2de7e7\x2d4b9f\x2d9738\x2dd3d8fa99388a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df2v4l.mount: Deactivated successfully. Mar 19 11:36:11.530324 systemd[1]: var-lib-kubelet-pods-d3886f5b\x2d16a9\x2d404c\x2da86e\x2d8e60ef9ee59b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djn784.mount: Deactivated successfully. 
Mar 19 11:36:11.530559 systemd[1]: var-lib-kubelet-pods-d3886f5b\x2d16a9\x2d404c\x2da86e\x2d8e60ef9ee59b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 19 11:36:11.530700 systemd[1]: var-lib-kubelet-pods-d3886f5b\x2d16a9\x2d404c\x2da86e\x2d8e60ef9ee59b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 19 11:36:11.752284 kubelet[3329]: I0319 11:36:11.751446 3329 scope.go:117] "RemoveContainer" containerID="d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe" Mar 19 11:36:11.757405 containerd[1956]: time="2025-03-19T11:36:11.757321130Z" level=info msg="RemoveContainer for \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\"" Mar 19 11:36:11.772259 containerd[1956]: time="2025-03-19T11:36:11.771951326Z" level=info msg="RemoveContainer for \"d9c2c72e824e7bd5be999c4559393bdb181881f1e4725e9b63accab2105c8dbe\" returns successfully" Mar 19 11:36:11.772823 kubelet[3329]: I0319 11:36:11.772775 3329 scope.go:117] "RemoveContainer" containerID="e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc" Mar 19 11:36:11.777747 containerd[1956]: time="2025-03-19T11:36:11.776906642Z" level=info msg="RemoveContainer for \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\"" Mar 19 11:36:11.787035 containerd[1956]: time="2025-03-19T11:36:11.786879614Z" level=info msg="RemoveContainer for \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\" returns successfully" Mar 19 11:36:11.788297 kubelet[3329]: I0319 11:36:11.788245 3329 scope.go:117] "RemoveContainer" containerID="b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997" Mar 19 11:36:11.791671 containerd[1956]: time="2025-03-19T11:36:11.791177246Z" level=info msg="RemoveContainer for \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\"" Mar 19 11:36:11.798243 containerd[1956]: time="2025-03-19T11:36:11.798156002Z" level=info msg="RemoveContainer for 
\"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\" returns successfully" Mar 19 11:36:11.799083 kubelet[3329]: I0319 11:36:11.798739 3329 scope.go:117] "RemoveContainer" containerID="f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e" Mar 19 11:36:11.803531 containerd[1956]: time="2025-03-19T11:36:11.801736334Z" level=info msg="RemoveContainer for \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\"" Mar 19 11:36:11.812257 containerd[1956]: time="2025-03-19T11:36:11.812187746Z" level=info msg="RemoveContainer for \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\" returns successfully" Mar 19 11:36:11.815690 kubelet[3329]: I0319 11:36:11.815536 3329 scope.go:117] "RemoveContainer" containerID="c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf" Mar 19 11:36:11.821010 containerd[1956]: time="2025-03-19T11:36:11.820460510Z" level=info msg="RemoveContainer for \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\"" Mar 19 11:36:11.831219 containerd[1956]: time="2025-03-19T11:36:11.831137234Z" level=info msg="RemoveContainer for \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\" returns successfully" Mar 19 11:36:11.831990 kubelet[3329]: I0319 11:36:11.831842 3329 scope.go:117] "RemoveContainer" containerID="1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436" Mar 19 11:36:11.836236 containerd[1956]: time="2025-03-19T11:36:11.835807262Z" level=info msg="RemoveContainer for \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\"" Mar 19 11:36:11.844820 containerd[1956]: time="2025-03-19T11:36:11.844769726Z" level=info msg="RemoveContainer for \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\" returns successfully" Mar 19 11:36:11.845391 kubelet[3329]: I0319 11:36:11.845330 3329 scope.go:117] "RemoveContainer" containerID="e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc" Mar 19 11:36:11.845795 
containerd[1956]: time="2025-03-19T11:36:11.845723882Z" level=error msg="ContainerStatus for \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\": not found" Mar 19 11:36:11.846099 kubelet[3329]: E0319 11:36:11.846061 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\": not found" containerID="e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc" Mar 19 11:36:11.846288 kubelet[3329]: I0319 11:36:11.846135 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc"} err="failed to get container status \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"e79be11f2154b0f18f12a115462b0fee04c6a7b49795bae442ca3a7a8050e4dc\": not found" Mar 19 11:36:11.846427 kubelet[3329]: I0319 11:36:11.846400 3329 scope.go:117] "RemoveContainer" containerID="b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997" Mar 19 11:36:11.847101 containerd[1956]: time="2025-03-19T11:36:11.847044830Z" level=error msg="ContainerStatus for \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\": not found" Mar 19 11:36:11.847613 kubelet[3329]: E0319 11:36:11.847391 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\": not found" containerID="b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997" Mar 19 11:36:11.847613 kubelet[3329]: I0319 11:36:11.847446 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997"} err="failed to get container status \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\": rpc error: code = NotFound desc = an error occurred when try to find container \"b962b5bfb17d588e3c45e343ef30aa33210deae84a9a91c1827a3b91f0475997\": not found" Mar 19 11:36:11.847613 kubelet[3329]: I0319 11:36:11.847481 3329 scope.go:117] "RemoveContainer" containerID="f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e" Mar 19 11:36:11.847912 containerd[1956]: time="2025-03-19T11:36:11.847848002Z" level=error msg="ContainerStatus for \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\": not found" Mar 19 11:36:11.848148 kubelet[3329]: E0319 11:36:11.848090 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\": not found" containerID="f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e" Mar 19 11:36:11.848228 kubelet[3329]: I0319 11:36:11.848153 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e"} err="failed to get container status \"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f8dc80383846e6a966e83b0ba5996b2663c2ccb8822492733cb9388300f9cd6e\": not found" Mar 19 11:36:11.848228 kubelet[3329]: I0319 11:36:11.848186 3329 scope.go:117] "RemoveContainer" containerID="c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf" Mar 19 11:36:11.848733 containerd[1956]: time="2025-03-19T11:36:11.848607926Z" level=error msg="ContainerStatus for \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\": not found" Mar 19 11:36:11.848907 kubelet[3329]: E0319 11:36:11.848865 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\": not found" containerID="c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf" Mar 19 11:36:11.848990 kubelet[3329]: I0319 11:36:11.848917 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf"} err="failed to get container status \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"c211e21bfc20983346f292f119d9d2af606aa6a8f25f6d684886d9ef5bb943cf\": not found" Mar 19 11:36:11.848990 kubelet[3329]: I0319 11:36:11.848950 3329 scope.go:117] "RemoveContainer" containerID="1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436" Mar 19 11:36:11.849543 containerd[1956]: time="2025-03-19T11:36:11.849455522Z" level=error msg="ContainerStatus for \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\": not found" Mar 19 11:36:11.849870 kubelet[3329]: E0319 11:36:11.849799 3329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\": not found" containerID="1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436" Mar 19 11:36:11.849938 kubelet[3329]: I0319 11:36:11.849881 3329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436"} err="failed to get container status \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ee82c1a76ddfe1d268ddcc388a69e28dc5b288156a90bb04f0ab5c21bff5436\": not found" Mar 19 11:36:12.442557 sshd[4975]: Connection closed by 139.178.68.195 port 59690 Mar 19 11:36:12.443475 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:12.450841 systemd[1]: sshd@27-172.31.31.152:22-139.178.68.195:59690.service: Deactivated successfully. Mar 19 11:36:12.454752 systemd[1]: session-28.scope: Deactivated successfully. Mar 19 11:36:12.455383 systemd[1]: session-28.scope: Consumed 1.128s CPU time, 21.4M memory peak. Mar 19 11:36:12.456315 systemd-logind[1931]: Session 28 logged out. Waiting for processes to exit. Mar 19 11:36:12.459417 systemd-logind[1931]: Removed session 28. Mar 19 11:36:12.483874 systemd[1]: Started sshd@28-172.31.31.152:22-139.178.68.195:59696.service - OpenSSH per-connection server daemon (139.178.68.195:59696). 
Mar 19 11:36:12.669801 sshd[5137]: Accepted publickey for core from 139.178.68.195 port 59696 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:36:12.672179 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:12.681065 systemd-logind[1931]: New session 29 of user core. Mar 19 11:36:12.686593 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 19 11:36:12.706548 ntpd[1924]: Deleting interface #12 lxc_health, fe80::d0c8:c1ff:fe43:1271%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Mar 19 11:36:12.707099 ntpd[1924]: 19 Mar 11:36:12 ntpd[1924]: Deleting interface #12 lxc_health, fe80::d0c8:c1ff:fe43:1271%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Mar 19 11:36:13.291519 kubelet[3329]: I0319 11:36:13.291458 3329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f2813f-e7e7-4b9f-9738-d3d8fa99388a" path="/var/lib/kubelet/pods/40f2813f-e7e7-4b9f-9738-d3d8fa99388a/volumes" Mar 19 11:36:13.292776 kubelet[3329]: I0319 11:36:13.292716 3329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3886f5b-16a9-404c-a86e-8e60ef9ee59b" path="/var/lib/kubelet/pods/d3886f5b-16a9-404c-a86e-8e60ef9ee59b/volumes" Mar 19 11:36:13.534992 kubelet[3329]: E0319 11:36:13.534937 3329 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 11:36:14.826927 sshd[5139]: Connection closed by 139.178.68.195 port 59696 Mar 19 11:36:14.826045 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:14.836568 systemd[1]: session-29.scope: Deactivated successfully. Mar 19 11:36:14.837486 systemd[1]: session-29.scope: Consumed 1.929s CPU time, 23.7M memory peak. 
Mar 19 11:36:14.839828 systemd[1]: sshd@28-172.31.31.152:22-139.178.68.195:59696.service: Deactivated successfully. Mar 19 11:36:14.851802 systemd-logind[1931]: Session 29 logged out. Waiting for processes to exit. Mar 19 11:36:14.863391 kubelet[3329]: E0319 11:36:14.863307 3329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3886f5b-16a9-404c-a86e-8e60ef9ee59b" containerName="mount-cgroup" Mar 19 11:36:14.865447 kubelet[3329]: E0319 11:36:14.864266 3329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3886f5b-16a9-404c-a86e-8e60ef9ee59b" containerName="mount-bpf-fs" Mar 19 11:36:14.865447 kubelet[3329]: E0319 11:36:14.864299 3329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3886f5b-16a9-404c-a86e-8e60ef9ee59b" containerName="apply-sysctl-overwrites" Mar 19 11:36:14.865447 kubelet[3329]: E0319 11:36:14.864315 3329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40f2813f-e7e7-4b9f-9738-d3d8fa99388a" containerName="cilium-operator" Mar 19 11:36:14.865447 kubelet[3329]: E0319 11:36:14.864330 3329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3886f5b-16a9-404c-a86e-8e60ef9ee59b" containerName="clean-cilium-state" Mar 19 11:36:14.865447 kubelet[3329]: E0319 11:36:14.864396 3329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3886f5b-16a9-404c-a86e-8e60ef9ee59b" containerName="cilium-agent" Mar 19 11:36:14.865447 kubelet[3329]: I0319 11:36:14.864465 3329 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3886f5b-16a9-404c-a86e-8e60ef9ee59b" containerName="cilium-agent" Mar 19 11:36:14.865447 kubelet[3329]: I0319 11:36:14.864483 3329 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f2813f-e7e7-4b9f-9738-d3d8fa99388a" containerName="cilium-operator" Mar 19 11:36:14.880267 systemd-logind[1931]: Removed session 29. 
Mar 19 11:36:14.894791 systemd[1]: Started sshd@29-172.31.31.152:22-139.178.68.195:59706.service - OpenSSH per-connection server daemon (139.178.68.195:59706). Mar 19 11:36:14.901444 kubelet[3329]: W0319 11:36:14.901363 3329 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-31-152" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-152' and this object Mar 19 11:36:14.901444 kubelet[3329]: E0319 11:36:14.901427 3329 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-31-152\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-152' and this object" logger="UnhandledError" Mar 19 11:36:14.901444 kubelet[3329]: W0319 11:36:14.901374 3329 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-31-152" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-152' and this object Mar 19 11:36:14.901771 kubelet[3329]: E0319 11:36:14.901478 3329 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-31-152\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-152' and this object" logger="UnhandledError" Mar 19 11:36:14.904408 kubelet[3329]: W0319 11:36:14.903599 3329 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to 
list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-31-152" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-152' and this object Mar 19 11:36:14.904408 kubelet[3329]: E0319 11:36:14.903655 3329 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-31-152\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-152' and this object" logger="UnhandledError" Mar 19 11:36:14.904408 kubelet[3329]: W0319 11:36:14.903734 3329 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-31-152" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-152' and this object Mar 19 11:36:14.904408 kubelet[3329]: E0319 11:36:14.903761 3329 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-31-152\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-152' and this object" logger="UnhandledError" Mar 19 11:36:14.919032 systemd[1]: Created slice kubepods-burstable-pod680e3cb8_0a46_423d_8056_22083e8fe75f.slice - libcontainer container kubepods-burstable-pod680e3cb8_0a46_423d_8056_22083e8fe75f.slice. 
Mar 19 11:36:14.959947 kubelet[3329]: I0319 11:36:14.959871 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-xtables-lock\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.961998 kubelet[3329]: I0319 11:36:14.961952 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/680e3cb8-0a46-423d-8056-22083e8fe75f-clustermesh-secrets\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.962438 kubelet[3329]: I0319 11:36:14.962376 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-bpf-maps\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.962791 kubelet[3329]: I0319 11:36:14.962603 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-host-proc-sys-net\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.962791 kubelet[3329]: I0319 11:36:14.962726 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-hostproc\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.965159 kubelet[3329]: I0319 11:36:14.963308 3329 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-cilium-run\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.965159 kubelet[3329]: I0319 11:36:14.965216 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-cni-path\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966162 kubelet[3329]: I0319 11:36:14.965563 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-lib-modules\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966162 kubelet[3329]: I0319 11:36:14.965639 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/680e3cb8-0a46-423d-8056-22083e8fe75f-cilium-ipsec-secrets\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966162 kubelet[3329]: I0319 11:36:14.965683 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/680e3cb8-0a46-423d-8056-22083e8fe75f-hubble-tls\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966162 kubelet[3329]: I0319 11:36:14.965735 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-host-proc-sys-kernel\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966162 kubelet[3329]: I0319 11:36:14.965772 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5hf\" (UniqueName: \"kubernetes.io/projected/680e3cb8-0a46-423d-8056-22083e8fe75f-kube-api-access-7h5hf\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966546 kubelet[3329]: I0319 11:36:14.965812 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-cilium-cgroup\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966546 kubelet[3329]: I0319 11:36:14.965848 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/680e3cb8-0a46-423d-8056-22083e8fe75f-cilium-config-path\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:14.966546 kubelet[3329]: I0319 11:36:14.966017 3329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/680e3cb8-0a46-423d-8056-22083e8fe75f-etc-cni-netd\") pod \"cilium-4k8st\" (UID: \"680e3cb8-0a46-423d-8056-22083e8fe75f\") " pod="kube-system/cilium-4k8st" Mar 19 11:36:15.104180 sshd[5150]: Accepted publickey for core from 139.178.68.195 port 59706 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:36:15.107323 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 
11:36:15.118679 systemd-logind[1931]: New session 30 of user core. Mar 19 11:36:15.133625 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 19 11:36:15.253491 sshd[5154]: Connection closed by 139.178.68.195 port 59706 Mar 19 11:36:15.254446 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:15.261993 systemd[1]: sshd@29-172.31.31.152:22-139.178.68.195:59706.service: Deactivated successfully. Mar 19 11:36:15.267454 systemd[1]: session-30.scope: Deactivated successfully. Mar 19 11:36:15.269953 systemd-logind[1931]: Session 30 logged out. Waiting for processes to exit. Mar 19 11:36:15.271618 systemd-logind[1931]: Removed session 30. Mar 19 11:36:15.293914 systemd[1]: Started sshd@30-172.31.31.152:22-139.178.68.195:59720.service - OpenSSH per-connection server daemon (139.178.68.195:59720). Mar 19 11:36:15.484312 sshd[5161]: Accepted publickey for core from 139.178.68.195 port 59720 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:36:15.486712 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:15.495891 systemd-logind[1931]: New session 31 of user core. Mar 19 11:36:15.503584 systemd[1]: Started session-31.scope - Session 31 of User core. 
Mar 19 11:36:16.068149 kubelet[3329]: E0319 11:36:16.067899 3329 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Mar 19 11:36:16.068149 kubelet[3329]: E0319 11:36:16.067930 3329 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Mar 19 11:36:16.068149 kubelet[3329]: E0319 11:36:16.067908 3329 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Mar 19 11:36:16.068149 kubelet[3329]: E0319 11:36:16.068004 3329 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-4k8st: failed to sync secret cache: timed out waiting for the condition
Mar 19 11:36:16.068149 kubelet[3329]: E0319 11:36:16.068020 3329 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/680e3cb8-0a46-423d-8056-22083e8fe75f-clustermesh-secrets podName:680e3cb8-0a46-423d-8056-22083e8fe75f nodeName:}" failed. No retries permitted until 2025-03-19 11:36:16.567993851 +0000 UTC m=+113.516014242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/680e3cb8-0a46-423d-8056-22083e8fe75f-clustermesh-secrets") pod "cilium-4k8st" (UID: "680e3cb8-0a46-423d-8056-22083e8fe75f") : failed to sync secret cache: timed out waiting for the condition
Mar 19 11:36:16.068149 kubelet[3329]: E0319 11:36:16.068053 3329 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/680e3cb8-0a46-423d-8056-22083e8fe75f-hubble-tls podName:680e3cb8-0a46-423d-8056-22083e8fe75f nodeName:}" failed. No retries permitted until 2025-03-19 11:36:16.568036439 +0000 UTC m=+113.516056818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/680e3cb8-0a46-423d-8056-22083e8fe75f-hubble-tls") pod "cilium-4k8st" (UID: "680e3cb8-0a46-423d-8056-22083e8fe75f") : failed to sync secret cache: timed out waiting for the condition
Mar 19 11:36:16.070096 kubelet[3329]: E0319 11:36:16.068080 3329 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/680e3cb8-0a46-423d-8056-22083e8fe75f-cilium-ipsec-secrets podName:680e3cb8-0a46-423d-8056-22083e8fe75f nodeName:}" failed. No retries permitted until 2025-03-19 11:36:16.568065047 +0000 UTC m=+113.516085426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/680e3cb8-0a46-423d-8056-22083e8fe75f-cilium-ipsec-secrets") pod "cilium-4k8st" (UID: "680e3cb8-0a46-423d-8056-22083e8fe75f") : failed to sync secret cache: timed out waiting for the condition
Mar 19 11:36:16.205400 kubelet[3329]: I0319 11:36:16.204757 3329 setters.go:600] "Node became not ready" node="ip-172-31-31-152" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T11:36:16Z","lastTransitionTime":"2025-03-19T11:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 19 11:36:16.736717 containerd[1956]: time="2025-03-19T11:36:16.736217190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4k8st,Uid:680e3cb8-0a46-423d-8056-22083e8fe75f,Namespace:kube-system,Attempt:0,}"
Mar 19 11:36:16.788050 containerd[1956]: time="2025-03-19T11:36:16.786767479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:36:16.788050 containerd[1956]: time="2025-03-19T11:36:16.787720483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:36:16.788050 containerd[1956]: time="2025-03-19T11:36:16.787749187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:36:16.788050 containerd[1956]: time="2025-03-19T11:36:16.787902883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:36:16.825675 systemd[1]: Started cri-containerd-e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929.scope - libcontainer container e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929.
Mar 19 11:36:16.866082 containerd[1956]: time="2025-03-19T11:36:16.866002267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4k8st,Uid:680e3cb8-0a46-423d-8056-22083e8fe75f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\""
Mar 19 11:36:16.872664 containerd[1956]: time="2025-03-19T11:36:16.872599723Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:36:16.894567 containerd[1956]: time="2025-03-19T11:36:16.894471487Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0\""
Mar 19 11:36:16.897665 containerd[1956]: time="2025-03-19T11:36:16.896484847Z" level=info msg="StartContainer for \"538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0\""
Mar 19 11:36:16.939648 systemd[1]: Started cri-containerd-538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0.scope - libcontainer container 538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0.
Mar 19 11:36:16.985019 containerd[1956]: time="2025-03-19T11:36:16.984950132Z" level=info msg="StartContainer for \"538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0\" returns successfully"
Mar 19 11:36:17.001393 systemd[1]: cri-containerd-538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0.scope: Deactivated successfully.
Mar 19 11:36:17.056965 containerd[1956]: time="2025-03-19T11:36:17.056889052Z" level=info msg="shim disconnected" id=538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0 namespace=k8s.io
Mar 19 11:36:17.057499 containerd[1956]: time="2025-03-19T11:36:17.057255988Z" level=warning msg="cleaning up after shim disconnected" id=538572e46a08d46699f5b12b07784cafddab6d1c231800a92bf803ec2c2214d0 namespace=k8s.io
Mar 19 11:36:17.057499 containerd[1956]: time="2025-03-19T11:36:17.057284296Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:17.795749 containerd[1956]: time="2025-03-19T11:36:17.795682892Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:36:17.826920 containerd[1956]: time="2025-03-19T11:36:17.826859936Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb\""
Mar 19 11:36:17.828213 containerd[1956]: time="2025-03-19T11:36:17.828132944Z" level=info msg="StartContainer for \"32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb\""
Mar 19 11:36:17.885675 systemd[1]: Started cri-containerd-32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb.scope - libcontainer container 32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb.
Mar 19 11:36:17.934067 containerd[1956]: time="2025-03-19T11:36:17.933983132Z" level=info msg="StartContainer for \"32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb\" returns successfully"
Mar 19 11:36:17.946073 systemd[1]: cri-containerd-32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb.scope: Deactivated successfully.
Mar 19 11:36:17.989032 containerd[1956]: time="2025-03-19T11:36:17.988958289Z" level=info msg="shim disconnected" id=32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb namespace=k8s.io
Mar 19 11:36:17.989439 containerd[1956]: time="2025-03-19T11:36:17.989406585Z" level=warning msg="cleaning up after shim disconnected" id=32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb namespace=k8s.io
Mar 19 11:36:17.989557 containerd[1956]: time="2025-03-19T11:36:17.989530641Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:18.538688 kubelet[3329]: E0319 11:36:18.537403 3329 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 19 11:36:18.586697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32eee81de44d51ec20eb6638c3be5cd8f10c1b563f1aebdefc9c877f723224cb-rootfs.mount: Deactivated successfully.
Mar 19 11:36:18.805213 containerd[1956]: time="2025-03-19T11:36:18.805071585Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:36:18.841051 containerd[1956]: time="2025-03-19T11:36:18.840905505Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee\""
Mar 19 11:36:18.842267 containerd[1956]: time="2025-03-19T11:36:18.842136957Z" level=info msg="StartContainer for \"a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee\""
Mar 19 11:36:18.901652 systemd[1]: Started cri-containerd-a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee.scope - libcontainer container a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee.
Mar 19 11:36:18.958548 containerd[1956]: time="2025-03-19T11:36:18.958480485Z" level=info msg="StartContainer for \"a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee\" returns successfully"
Mar 19 11:36:18.961585 systemd[1]: cri-containerd-a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee.scope: Deactivated successfully.
Mar 19 11:36:19.019153 containerd[1956]: time="2025-03-19T11:36:19.019073022Z" level=info msg="shim disconnected" id=a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee namespace=k8s.io
Mar 19 11:36:19.019153 containerd[1956]: time="2025-03-19T11:36:19.019149654Z" level=warning msg="cleaning up after shim disconnected" id=a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee namespace=k8s.io
Mar 19 11:36:19.019675 containerd[1956]: time="2025-03-19T11:36:19.019172574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:19.590603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a23910536abf14a3900bc45193f5f84690b321de3bd686058e2bc1dc9d3a79ee-rootfs.mount: Deactivated successfully.
Mar 19 11:36:19.813253 containerd[1956]: time="2025-03-19T11:36:19.813184042Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:36:19.847780 containerd[1956]: time="2025-03-19T11:36:19.847599142Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3\""
Mar 19 11:36:19.850129 containerd[1956]: time="2025-03-19T11:36:19.850063342Z" level=info msg="StartContainer for \"f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3\""
Mar 19 11:36:19.913656 systemd[1]: Started cri-containerd-f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3.scope - libcontainer container f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3.
Mar 19 11:36:19.968185 systemd[1]: cri-containerd-f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3.scope: Deactivated successfully.
Mar 19 11:36:19.972184 containerd[1956]: time="2025-03-19T11:36:19.972119002Z" level=info msg="StartContainer for \"f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3\" returns successfully"
Mar 19 11:36:20.018464 containerd[1956]: time="2025-03-19T11:36:20.018170479Z" level=info msg="shim disconnected" id=f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3 namespace=k8s.io
Mar 19 11:36:20.018464 containerd[1956]: time="2025-03-19T11:36:20.018243475Z" level=warning msg="cleaning up after shim disconnected" id=f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3 namespace=k8s.io
Mar 19 11:36:20.018464 containerd[1956]: time="2025-03-19T11:36:20.018262711Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:20.589150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5792788eae359d2815276d5b94a1342ce62558f062855773dafec799de796e3-rootfs.mount: Deactivated successfully.
Mar 19 11:36:20.820989 containerd[1956]: time="2025-03-19T11:36:20.819971615Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:36:20.854395 containerd[1956]: time="2025-03-19T11:36:20.853965575Z" level=info msg="CreateContainer within sandbox \"e2ccb628ddb6360a8ed4aa8943a289dc93ff3615a3c38147b1595ca5e9346929\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2489e5589f5db3525e43e7bf609164b4dd5719de23213f7a640a358e5a87b82e\""
Mar 19 11:36:20.855407 containerd[1956]: time="2025-03-19T11:36:20.855226751Z" level=info msg="StartContainer for \"2489e5589f5db3525e43e7bf609164b4dd5719de23213f7a640a358e5a87b82e\""
Mar 19 11:36:20.918634 systemd[1]: Started cri-containerd-2489e5589f5db3525e43e7bf609164b4dd5719de23213f7a640a358e5a87b82e.scope - libcontainer container 2489e5589f5db3525e43e7bf609164b4dd5719de23213f7a640a358e5a87b82e.
Mar 19 11:36:20.975862 containerd[1956]: time="2025-03-19T11:36:20.975753299Z" level=info msg="StartContainer for \"2489e5589f5db3525e43e7bf609164b4dd5719de23213f7a640a358e5a87b82e\" returns successfully"
Mar 19 11:36:21.899486 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 19 11:36:21.905168 kubelet[3329]: I0319 11:36:21.904038 3329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4k8st" podStartSLOduration=7.904011276 podStartE2EDuration="7.904011276s" podCreationTimestamp="2025-03-19 11:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:36:21.902168784 +0000 UTC m=+118.850189175" watchObservedRunningTime="2025-03-19 11:36:21.904011276 +0000 UTC m=+118.852031667"
Mar 19 11:36:23.252188 containerd[1956]: time="2025-03-19T11:36:23.251921399Z" level=info msg="StopPodSandbox for \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\""
Mar 19 11:36:23.252188 containerd[1956]: time="2025-03-19T11:36:23.252062915Z" level=info msg="TearDown network for sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" successfully"
Mar 19 11:36:23.252188 containerd[1956]: time="2025-03-19T11:36:23.252085247Z" level=info msg="StopPodSandbox for \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" returns successfully"
Mar 19 11:36:23.253211 containerd[1956]: time="2025-03-19T11:36:23.253164011Z" level=info msg="RemovePodSandbox for \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\""
Mar 19 11:36:23.253284 containerd[1956]: time="2025-03-19T11:36:23.253243031Z" level=info msg="Forcibly stopping sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\""
Mar 19 11:36:23.253437 containerd[1956]: time="2025-03-19T11:36:23.253404227Z" level=info msg="TearDown network for sandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" successfully"
Mar 19 11:36:23.259909 containerd[1956]: time="2025-03-19T11:36:23.259840403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:36:23.260072 containerd[1956]: time="2025-03-19T11:36:23.259937939Z" level=info msg="RemovePodSandbox \"54a14875de3e90abfcb2244aa4840f33144b70f12604b56161ef216eaa303328\" returns successfully"
Mar 19 11:36:23.260715 containerd[1956]: time="2025-03-19T11:36:23.260657771Z" level=info msg="StopPodSandbox for \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\""
Mar 19 11:36:23.260838 containerd[1956]: time="2025-03-19T11:36:23.260794559Z" level=info msg="TearDown network for sandbox \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" successfully"
Mar 19 11:36:23.260838 containerd[1956]: time="2025-03-19T11:36:23.260818415Z" level=info msg="StopPodSandbox for \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" returns successfully"
Mar 19 11:36:23.261559 containerd[1956]: time="2025-03-19T11:36:23.261506831Z" level=info msg="RemovePodSandbox for \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\""
Mar 19 11:36:23.261670 containerd[1956]: time="2025-03-19T11:36:23.261555047Z" level=info msg="Forcibly stopping sandbox \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\""
Mar 19 11:36:23.261670 containerd[1956]: time="2025-03-19T11:36:23.261653855Z" level=info msg="TearDown network for sandbox \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" successfully"
Mar 19 11:36:23.268475 containerd[1956]: time="2025-03-19T11:36:23.268396715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:36:23.268695 containerd[1956]: time="2025-03-19T11:36:23.268478039Z" level=info msg="RemovePodSandbox \"01ffe4621a1515764783d6b96c462118bf085b576cfb6961b62b4676c2616541\" returns successfully"
Mar 19 11:36:24.201160 systemd[1]: run-containerd-runc-k8s.io-2489e5589f5db3525e43e7bf609164b4dd5719de23213f7a640a358e5a87b82e-runc.gFlr1y.mount: Deactivated successfully.
Mar 19 11:36:26.271445 systemd-networkd[1848]: lxc_health: Link UP
Mar 19 11:36:26.291240 (udev-worker)[5989]: Network interface NamePolicy= disabled on kernel command line.
Mar 19 11:36:26.321321 systemd-networkd[1848]: lxc_health: Gained carrier
Mar 19 11:36:27.501570 systemd-networkd[1848]: lxc_health: Gained IPv6LL
Mar 19 11:36:29.706626 ntpd[1924]: Listen normally on 15 lxc_health [fe80::4c3a:39ff:fe5b:f9fa%14]:123
Mar 19 11:36:29.707190 ntpd[1924]: 19 Mar 11:36:29 ntpd[1924]: Listen normally on 15 lxc_health [fe80::4c3a:39ff:fe5b:f9fa%14]:123
Mar 19 11:36:31.168390 systemd[1]: run-containerd-runc-k8s.io-2489e5589f5db3525e43e7bf609164b4dd5719de23213f7a640a358e5a87b82e-runc.lYZcqZ.mount: Deactivated successfully.
Mar 19 11:36:31.304804 sshd[5163]: Connection closed by 139.178.68.195 port 59720
Mar 19 11:36:31.305112 sshd-session[5161]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:31.312282 systemd[1]: sshd@30-172.31.31.152:22-139.178.68.195:59720.service: Deactivated successfully.
Mar 19 11:36:31.317271 systemd[1]: session-31.scope: Deactivated successfully.
Mar 19 11:36:31.323497 systemd-logind[1931]: Session 31 logged out. Waiting for processes to exit.
Mar 19 11:36:31.327521 systemd-logind[1931]: Removed session 31.
Mar 19 11:36:46.249258 kubelet[3329]: E0319 11:36:46.248828 3329 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-152?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 11:36:46.311404 systemd[1]: cri-containerd-18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47.scope: Deactivated successfully.
Mar 19 11:36:46.313546 systemd[1]: cri-containerd-18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47.scope: Consumed 5.006s CPU time, 53.5M memory peak.
Mar 19 11:36:46.358704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47-rootfs.mount: Deactivated successfully.
Mar 19 11:36:46.365079 containerd[1956]: time="2025-03-19T11:36:46.364990522Z" level=info msg="shim disconnected" id=18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47 namespace=k8s.io
Mar 19 11:36:46.365079 containerd[1956]: time="2025-03-19T11:36:46.365072038Z" level=warning msg="cleaning up after shim disconnected" id=18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47 namespace=k8s.io
Mar 19 11:36:46.365921 containerd[1956]: time="2025-03-19T11:36:46.365093554Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:46.898484 kubelet[3329]: I0319 11:36:46.898069 3329 scope.go:117] "RemoveContainer" containerID="18d5f8b4e7fd49e798697a20ceabf489adcf5541112494c52b9b68179d57bd47"
Mar 19 11:36:46.901143 containerd[1956]: time="2025-03-19T11:36:46.901094496Z" level=info msg="CreateContainer within sandbox \"aedb254a9e25d98e027a09cece6a16fc95a5a1cb56d70b3fd1c24f8d80ec8e9b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 19 11:36:46.926634 containerd[1956]: time="2025-03-19T11:36:46.926502120Z" level=info msg="CreateContainer within sandbox \"aedb254a9e25d98e027a09cece6a16fc95a5a1cb56d70b3fd1c24f8d80ec8e9b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0503d099ed10078071cd29a03fce689e3887c7b974d1ad8819809e9872838473\""
Mar 19 11:36:46.927372 containerd[1956]: time="2025-03-19T11:36:46.927145848Z" level=info msg="StartContainer for \"0503d099ed10078071cd29a03fce689e3887c7b974d1ad8819809e9872838473\""
Mar 19 11:36:46.981662 systemd[1]: Started cri-containerd-0503d099ed10078071cd29a03fce689e3887c7b974d1ad8819809e9872838473.scope - libcontainer container 0503d099ed10078071cd29a03fce689e3887c7b974d1ad8819809e9872838473.
Mar 19 11:36:47.051590 containerd[1956]: time="2025-03-19T11:36:47.051513741Z" level=info msg="StartContainer for \"0503d099ed10078071cd29a03fce689e3887c7b974d1ad8819809e9872838473\" returns successfully"
Mar 19 11:36:50.128506 systemd[1]: cri-containerd-05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d.scope: Deactivated successfully.
Mar 19 11:36:50.129072 systemd[1]: cri-containerd-05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d.scope: Consumed 3.407s CPU time, 20.4M memory peak.
Mar 19 11:36:50.169893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d-rootfs.mount: Deactivated successfully.
Mar 19 11:36:50.183638 containerd[1956]: time="2025-03-19T11:36:50.183557173Z" level=info msg="shim disconnected" id=05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d namespace=k8s.io
Mar 19 11:36:50.184412 containerd[1956]: time="2025-03-19T11:36:50.183654697Z" level=warning msg="cleaning up after shim disconnected" id=05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d namespace=k8s.io
Mar 19 11:36:50.184412 containerd[1956]: time="2025-03-19T11:36:50.183676909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:50.913925 kubelet[3329]: I0319 11:36:50.913616 3329 scope.go:117] "RemoveContainer" containerID="05d6d0f292b81ba8711813cb05bf60d737f7f982fc038be61310e419ab72027d"
Mar 19 11:36:50.916170 containerd[1956]: time="2025-03-19T11:36:50.916081300Z" level=info msg="CreateContainer within sandbox \"f4bb3ea63efbf712d503807d38cc9927bdb7d2382805290348ebe01d82cae10b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 19 11:36:50.946082 containerd[1956]: time="2025-03-19T11:36:50.945943360Z" level=info msg="CreateContainer within sandbox \"f4bb3ea63efbf712d503807d38cc9927bdb7d2382805290348ebe01d82cae10b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"351d1fae29f143f5febbd7f927454971ee560e686c5d68eadba9e6778e5c59f9\""
Mar 19 11:36:50.947649 containerd[1956]: time="2025-03-19T11:36:50.946996852Z" level=info msg="StartContainer for \"351d1fae29f143f5febbd7f927454971ee560e686c5d68eadba9e6778e5c59f9\""
Mar 19 11:36:50.997942 systemd[1]: Started cri-containerd-351d1fae29f143f5febbd7f927454971ee560e686c5d68eadba9e6778e5c59f9.scope - libcontainer container 351d1fae29f143f5febbd7f927454971ee560e686c5d68eadba9e6778e5c59f9.
Mar 19 11:36:51.061865 containerd[1956]: time="2025-03-19T11:36:51.061803829Z" level=info msg="StartContainer for \"351d1fae29f143f5febbd7f927454971ee560e686c5d68eadba9e6778e5c59f9\" returns successfully"
Mar 19 11:36:56.250071 kubelet[3329]: E0319 11:36:56.249716 3329 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-152?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"