Mar 17 18:18:23.953100 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 18:18:23.953144 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:18:23.953167 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:18:23.953182 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Mar 17 18:18:23.953196 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:18:23.953209 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 18:18:23.953225 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 18:18:23.953240 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 18:18:23.953254 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 18:18:23.953268 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 18:18:23.953286 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 18:18:23.953301 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 18:18:23.953315 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 18:18:23.953329 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 18:18:23.953345 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 18:18:23.953364 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 18:18:23.953379 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 18:18:23.953394 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 18:18:23.953408 kernel: printk: bootconsole [uart0] enabled
Mar 17 18:18:23.953423 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:18:23.953438 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:18:23.953453 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Mar 17 18:18:23.953468 kernel: Zone ranges:
Mar 17 18:18:23.953483 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 18:18:23.953497 kernel: DMA32 empty
Mar 17 18:18:23.953513 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 18:18:23.953535 kernel: Movable zone start for each node
Mar 17 18:18:23.953551 kernel: Early memory node ranges
Mar 17 18:18:23.953566 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 18:18:23.953580 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 18:18:23.953595 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 18:18:23.953609 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 18:18:23.953624 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 18:18:23.953639 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 18:18:23.953653 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 18:18:23.953668 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 18:18:23.953682 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:18:23.953697 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 18:18:23.953716 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:18:23.953730 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 18:18:23.953751 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:18:23.953767 kernel: psci: Trusted OS migration not required
Mar 17 18:18:23.953783 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:18:23.953846 kernel: ACPI: SRAT not present
Mar 17 18:18:23.953866 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:18:23.953882 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:18:23.953899 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 18:18:23.953914 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:18:23.953931 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:18:23.953946 kernel: CPU features: detected: Spectre-v2
Mar 17 18:18:23.953962 kernel: CPU features: detected: Spectre-v3a
Mar 17 18:18:23.953977 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:18:23.953993 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:18:23.954009 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:18:23.954030 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 18:18:23.954046 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 18:18:23.954062 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 18:18:23.954077 kernel: Policy zone: Normal
Mar 17 18:18:23.954095 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:18:23.954112 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:18:23.954127 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:18:23.954143 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:18:23.954158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:18:23.954174 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 18:18:23.954194 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Mar 17 18:18:23.954211 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:18:23.954226 kernel: trace event string verifier disabled
Mar 17 18:18:23.954242 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:18:23.954258 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:18:23.954274 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:18:23.954290 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:18:23.954306 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:18:23.954322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:18:23.954338 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:18:23.954353 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:18:23.954368 kernel: GICv3: 96 SPIs implemented
Mar 17 18:18:23.954388 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:18:23.954403 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:18:23.954419 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:18:23.954434 kernel: GICv3: 16 PPIs implemented
Mar 17 18:18:23.954449 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 18:18:23.954465 kernel: ACPI: SRAT not present
Mar 17 18:18:23.954479 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 18:18:23.954495 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:18:23.954510 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:18:23.954526 kernel: GICv3: using LPI property table @0x00000004000b0000
Mar 17 18:18:23.954541 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 18:18:23.954561 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Mar 17 18:18:23.954577 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 18:18:23.954592 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 18:18:23.954608 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 18:18:23.954623 kernel: Console: colour dummy device 80x25
Mar 17 18:18:23.954640 kernel: printk: console [tty1] enabled
Mar 17 18:18:23.954656 kernel: ACPI: Core revision 20210730
Mar 17 18:18:23.954672 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 18:18:23.954688 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:18:23.954704 kernel: LSM: Security Framework initializing
Mar 17 18:18:23.954723 kernel: SELinux: Initializing.
Mar 17 18:18:23.954740 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:18:23.954756 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:18:23.954772 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:18:23.954787 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 18:18:23.954848 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 18:18:23.954867 kernel: Remapping and enabling EFI services.
Mar 17 18:18:23.954882 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:18:23.954898 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:18:23.954919 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 18:18:23.954935 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Mar 17 18:18:23.954951 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 18:18:23.954967 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:18:23.954983 kernel: SMP: Total of 2 processors activated.
Mar 17 18:18:23.954999 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:18:23.955014 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 18:18:23.955030 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:18:23.955046 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:18:23.955062 kernel: alternatives: patching kernel code
Mar 17 18:18:23.955082 kernel: devtmpfs: initialized
Mar 17 18:18:23.955097 kernel: KASLR disabled due to lack of seed
Mar 17 18:18:23.955123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:18:23.955144 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:18:23.955160 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:18:23.955176 kernel: SMBIOS 3.0.0 present.
Mar 17 18:18:23.955193 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 18:18:23.955209 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:18:23.955225 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:18:23.955242 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:18:23.955259 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:18:23.955279 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:18:23.955296 kernel: audit: type=2000 audit(0.254:1): state=initialized audit_enabled=0 res=1
Mar 17 18:18:23.955312 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:18:23.955328 kernel: cpuidle: using governor menu
Mar 17 18:18:23.955345 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:18:23.955365 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:18:23.955381 kernel: ACPI: bus type PCI registered
Mar 17 18:18:23.955398 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:18:23.955414 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:18:23.955430 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:18:23.955447 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:18:23.955464 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:18:23.955480 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:18:23.955496 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:18:23.955517 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:18:23.955534 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:18:23.955550 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:18:23.955566 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:18:23.955583 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:18:23.955599 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:18:23.955616 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:18:23.955632 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:18:23.955649 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:18:23.955668 kernel: ACPI: Interpreter enabled
Mar 17 18:18:23.955685 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:18:23.955701 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:18:23.955718 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 18:18:23.956018 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:18:23.956212 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:18:23.956406 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:18:23.956591 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 18:18:23.956780 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 18:18:23.956824 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 18:18:23.956844 kernel: acpiphp: Slot [1] registered
Mar 17 18:18:23.956860 kernel: acpiphp: Slot [2] registered
Mar 17 18:18:23.956877 kernel: acpiphp: Slot [3] registered
Mar 17 18:18:23.956893 kernel: acpiphp: Slot [4] registered
Mar 17 18:18:23.956909 kernel: acpiphp: Slot [5] registered
Mar 17 18:18:23.956925 kernel: acpiphp: Slot [6] registered
Mar 17 18:18:23.956941 kernel: acpiphp: Slot [7] registered
Mar 17 18:18:23.956962 kernel: acpiphp: Slot [8] registered
Mar 17 18:18:23.956978 kernel: acpiphp: Slot [9] registered
Mar 17 18:18:23.956995 kernel: acpiphp: Slot [10] registered
Mar 17 18:18:23.957011 kernel: acpiphp: Slot [11] registered
Mar 17 18:18:23.957027 kernel: acpiphp: Slot [12] registered
Mar 17 18:18:23.957043 kernel: acpiphp: Slot [13] registered
Mar 17 18:18:23.957059 kernel: acpiphp: Slot [14] registered
Mar 17 18:18:23.957075 kernel: acpiphp: Slot [15] registered
Mar 17 18:18:23.957092 kernel: acpiphp: Slot [16] registered
Mar 17 18:18:23.957111 kernel: acpiphp: Slot [17] registered
Mar 17 18:18:23.957128 kernel: acpiphp: Slot [18] registered
Mar 17 18:18:23.957144 kernel: acpiphp: Slot [19] registered
Mar 17 18:18:23.957161 kernel: acpiphp: Slot [20] registered
Mar 17 18:18:23.957177 kernel: acpiphp: Slot [21] registered
Mar 17 18:18:23.957193 kernel: acpiphp: Slot [22] registered
Mar 17 18:18:23.957210 kernel: acpiphp: Slot [23] registered
Mar 17 18:18:23.957226 kernel: acpiphp: Slot [24] registered
Mar 17 18:18:23.957243 kernel: acpiphp: Slot [25] registered
Mar 17 18:18:23.957259 kernel: acpiphp: Slot [26] registered
Mar 17 18:18:23.957279 kernel: acpiphp: Slot [27] registered
Mar 17 18:18:23.957295 kernel: acpiphp: Slot [28] registered
Mar 17 18:18:23.957311 kernel: acpiphp: Slot [29] registered
Mar 17 18:18:23.957328 kernel: acpiphp: Slot [30] registered
Mar 17 18:18:23.957344 kernel: acpiphp: Slot [31] registered
Mar 17 18:18:23.957360 kernel: PCI host bridge to bus 0000:00
Mar 17 18:18:23.957565 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 18:18:23.957742 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:18:23.957984 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:18:23.959761 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 18:18:23.960036 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 18:18:23.960263 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 18:18:23.960461 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 18:18:23.960673 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 18:18:23.967945 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 18:18:23.968156 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:18:23.968364 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 18:18:23.968557 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 18:18:23.968752 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 18:18:23.970586 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 18:18:23.973841 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:18:23.974112 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 18:18:23.974332 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 18:18:23.974552 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 18:18:23.974770 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 18:18:23.975032 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 18:18:23.975240 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 18:18:23.975438 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:18:23.975640 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:18:23.975663 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:18:23.975681 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:18:23.975698 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:18:23.975715 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:18:23.975732 kernel: iommu: Default domain type: Translated
Mar 17 18:18:23.975749 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:18:23.975765 kernel: vgaarb: loaded
Mar 17 18:18:23.975782 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:18:23.975820 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:18:23.975839 kernel: PTP clock support registered
Mar 17 18:18:23.975856 kernel: Registered efivars operations
Mar 17 18:18:23.975873 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:18:23.975890 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:18:23.975907 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:18:23.975924 kernel: pnp: PnP ACPI init
Mar 17 18:18:23.976161 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 18:18:23.976191 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:18:23.976208 kernel: NET: Registered PF_INET protocol family
Mar 17 18:18:23.976225 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:18:23.976242 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:18:23.976259 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:18:23.976276 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:18:23.976293 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:18:23.976310 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:18:23.976327 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:18:23.976348 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:18:23.976364 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:18:23.976381 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:18:23.976398 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 18:18:23.976414 kernel: kvm [1]: HYP mode not available
Mar 17 18:18:23.976431 kernel: Initialise system trusted keyrings
Mar 17 18:18:23.976448 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:18:23.976465 kernel: Key type asymmetric registered
Mar 17 18:18:23.976481 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:18:23.976501 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:18:23.976518 kernel: io scheduler mq-deadline registered
Mar 17 18:18:23.976534 kernel: io scheduler kyber registered
Mar 17 18:18:23.976551 kernel: io scheduler bfq registered
Mar 17 18:18:23.976771 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 18:18:23.976830 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:18:23.976853 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:18:23.976870 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 18:18:23.976894 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 18:18:23.976911 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:18:23.976929 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 18:18:23.977155 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 18:18:23.977178 kernel: printk: console [ttyS0] disabled
Mar 17 18:18:23.977196 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 18:18:23.977213 kernel: printk: console [ttyS0] enabled
Mar 17 18:18:23.977230 kernel: printk: bootconsole [uart0] disabled
Mar 17 18:18:23.977246 kernel: thunder_xcv, ver 1.0
Mar 17 18:18:23.977268 kernel: thunder_bgx, ver 1.0
Mar 17 18:18:23.977284 kernel: nicpf, ver 1.0
Mar 17 18:18:23.977301 kernel: nicvf, ver 1.0
Mar 17 18:18:23.977539 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:18:23.977756 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:18:23 UTC (1742235503)
Mar 17 18:18:23.977780 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:18:23.977813 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:18:23.977834 kernel: Segment Routing with IPv6
Mar 17 18:18:23.977851 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:18:23.977873 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:18:23.977890 kernel: Key type dns_resolver registered
Mar 17 18:18:23.977906 kernel: registered taskstats version 1
Mar 17 18:18:23.977923 kernel: Loading compiled-in X.509 certificates
Mar 17 18:18:23.977940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:18:23.977956 kernel: Key type .fscrypt registered
Mar 17 18:18:23.977973 kernel: Key type fscrypt-provisioning registered
Mar 17 18:18:23.977989 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:18:23.978006 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:18:23.978026 kernel: ima: No architecture policies found
Mar 17 18:18:23.978042 kernel: clk: Disabling unused clocks
Mar 17 18:18:23.978059 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:18:23.978076 kernel: Run /init as init process
Mar 17 18:18:23.978092 kernel: with arguments:
Mar 17 18:18:23.978109 kernel: /init
Mar 17 18:18:23.978125 kernel: with environment:
Mar 17 18:18:23.978141 kernel: HOME=/
Mar 17 18:18:23.978233 kernel: TERM=linux
Mar 17 18:18:23.978257 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:18:23.978279 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:18:23.978301 systemd[1]: Detected virtualization amazon.
Mar 17 18:18:23.978319 systemd[1]: Detected architecture arm64.
Mar 17 18:18:23.978336 systemd[1]: Running in initrd.
Mar 17 18:18:23.978354 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:18:23.978371 systemd[1]: Hostname set to .
Mar 17 18:18:23.978393 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:18:23.978411 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:18:23.978429 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:18:23.978446 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:18:23.978464 systemd[1]: Reached target paths.target.
Mar 17 18:18:23.978481 systemd[1]: Reached target slices.target.
Mar 17 18:18:23.978498 systemd[1]: Reached target swap.target.
Mar 17 18:18:23.978515 systemd[1]: Reached target timers.target.
Mar 17 18:18:23.978538 systemd[1]: Listening on iscsid.socket.
Mar 17 18:18:23.978557 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:18:23.978575 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:18:23.978592 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:18:23.978610 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:18:23.978628 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:18:23.978646 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:18:23.978663 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:18:23.978685 systemd[1]: Reached target sockets.target.
Mar 17 18:18:23.978702 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:18:23.978720 systemd[1]: Finished network-cleanup.service.
Mar 17 18:18:23.978738 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:18:23.978755 systemd[1]: Starting systemd-journald.service...
Mar 17 18:18:23.978773 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:18:23.978791 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:18:23.978865 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:18:23.978884 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:18:23.978907 kernel: audit: type=1130 audit(1742235503.945:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:23.978925 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:18:23.978943 kernel: audit: type=1130 audit(1742235503.957:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:23.978961 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:18:23.978978 kernel: audit: type=1130 audit(1742235503.969:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:23.979000 systemd-journald[309]: Journal started
Mar 17 18:18:23.979094 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2e0f247ab0d469a9b89e8eb34a4bfa) is 8.0M, max 75.4M, 67.4M free.
Mar 17 18:18:23.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:23.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:23.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:23.996862 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:18:23.996946 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:18:23.996972 systemd[1]: Started systemd-journald.service.
Mar 17 18:18:23.981233 systemd-modules-load[310]: Inserted module 'overlay'
Mar 17 18:18:24.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.030604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:18:24.041082 kernel: audit: type=1130 audit(1742235504.007:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.041125 kernel: audit: type=1130 audit(1742235504.030:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.055406 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:18:24.072400 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:18:24.073857 kernel: audit: type=1130 audit(1742235504.056:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.058273 systemd-resolved[311]: Positive Trust Anchors:
Mar 17 18:18:24.058287 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:18:24.085979 kernel: Bridge firewalling registered
Mar 17 18:18:24.058345 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:18:24.066727 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:18:24.083146 systemd-modules-load[310]: Inserted module 'br_netfilter'
Mar 17 18:18:24.109128 kernel: SCSI subsystem initialized
Mar 17 18:18:24.129496 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:18:24.129567 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:18:24.133322 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:18:24.133999 dracut-cmdline[326]: dracut-dracut-053
Mar 17 18:18:24.141588 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:18:24.163524 systemd-modules-load[310]: Inserted module 'dm_multipath'
Mar 17 18:18:24.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.164940 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:18:24.177695 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:18:24.181968 kernel: audit: type=1130 audit(1742235504.165:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.207220 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:18:24.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.217938 kernel: audit: type=1130 audit(1742235504.207:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.288829 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:18:24.310856 kernel: iscsi: registered transport (tcp)
Mar 17 18:18:24.337844 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:18:24.337914 kernel: QLogic iSCSI HBA Driver
Mar 17 18:18:24.522388 systemd-resolved[311]: Defaulting to hostname 'linux'.
Mar 17 18:18:24.525067 kernel: random: crng init done
Mar 17 18:18:24.525773 systemd[1]: Started systemd-resolved.service.
Mar 17 18:18:24.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.535484 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:18:24.538480 kernel: audit: type=1130 audit(1742235504.526:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.560558 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:18:24.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:24.566094 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:18:24.634864 kernel: raid6: neonx8 gen() 6323 MB/s
Mar 17 18:18:24.652853 kernel: raid6: neonx8 xor() 4695 MB/s
Mar 17 18:18:24.670854 kernel: raid6: neonx4 gen() 6432 MB/s
Mar 17 18:18:24.688839 kernel: raid6: neonx4 xor() 4886 MB/s
Mar 17 18:18:24.706851 kernel: raid6: neonx2 gen() 5733 MB/s
Mar 17 18:18:24.724838 kernel: raid6: neonx2 xor() 4481 MB/s
Mar 17 18:18:24.742849 kernel: raid6: neonx1 gen() 4454 MB/s
Mar 17 18:18:24.760839 kernel: raid6: neonx1 xor() 3664 MB/s
Mar 17 18:18:24.778848 kernel: raid6: int64x8 gen() 3407 MB/s
Mar 17 18:18:24.796843 kernel: raid6: int64x8 xor() 2081 MB/s
Mar 17 18:18:24.814849 kernel: raid6: int64x4 gen() 3773 MB/s
Mar 17 18:18:24.832840 kernel: raid6: int64x4 xor() 2184 MB/s
Mar 17 18:18:24.850850 kernel: raid6: int64x2 gen() 3558 MB/s
Mar 17 18:18:24.868841 kernel: raid6: int64x2 xor() 1945 MB/s
Mar 17 18:18:24.886848 kernel: raid6: int64x1 gen() 2761 MB/s
Mar 17 18:18:24.906116 kernel: raid6: int64x1 xor() 1451 MB/s
Mar 17 18:18:24.906166 kernel: raid6: using algorithm neonx4 gen() 6432 MB/s
Mar 17 18:18:24.906190 kernel: raid6: .... xor() 4886 MB/s, rmw enabled
Mar 17 18:18:24.907813 kernel: raid6: using neon recovery algorithm
Mar 17 18:18:24.927671 kernel: xor: measuring software checksum speed
Mar 17 18:18:24.927737 kernel: 8regs : 9297 MB/sec
Mar 17 18:18:24.929834 kernel: 32regs : 10354 MB/sec
Mar 17 18:18:24.929878 kernel: arm64_neon : 8854 MB/sec
Mar 17 18:18:24.933044 kernel: xor: using function: 32regs (10354 MB/sec)
Mar 17 18:18:25.024841 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:18:25.043134 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:18:25.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:25.045000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:18:25.045000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:18:25.047770 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:18:25.075325 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Mar 17 18:18:25.086076 systemd[1]: Started systemd-udevd.service.
Mar 17 18:18:25.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:25.092894 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:18:25.120492 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
Mar 17 18:18:25.182243 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:18:25.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:25.186885 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:18:25.295570 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:18:25.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:25.400840 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 18:18:25.405299 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 18:18:25.426925 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 18:18:25.427173 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 18:18:25.427411 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 18:18:25.427437 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ff:99:fb:1c:37
Mar 17 18:18:25.427640 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 18:18:25.430699 (udev-worker)[572]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:18:25.438850 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 18:18:25.448774 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:18:25.448860 kernel: GPT:9289727 != 16777215
Mar 17 18:18:25.452037 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:18:25.453348 kernel: GPT:9289727 != 16777215
Mar 17 18:18:25.453386 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:18:25.456551 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:18:25.533843 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (559)
Mar 17 18:18:25.568394 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:18:25.617428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:18:25.663189 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:18:25.666527 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:18:25.679373 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:18:25.692039 systemd[1]: Starting disk-uuid.service...
Mar 17 18:18:25.703081 disk-uuid[668]: Primary Header is updated.
Mar 17 18:18:25.703081 disk-uuid[668]: Secondary Entries is updated.
Mar 17 18:18:25.703081 disk-uuid[668]: Secondary Header is updated.
Mar 17 18:18:25.713833 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:18:25.722838 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:18:25.731827 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:18:26.733846 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:18:26.734156 disk-uuid[669]: The operation has completed successfully.
Mar 17 18:18:26.903468 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:18:26.905677 systemd[1]: Finished disk-uuid.service.
Mar 17 18:18:26.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:26.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:26.927217 systemd[1]: Starting verity-setup.service...
Mar 17 18:18:26.965902 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 18:18:27.065581 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:18:27.070324 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:18:27.073741 systemd[1]: Finished verity-setup.service.
Mar 17 18:18:27.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.165836 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:18:27.166962 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:18:27.170087 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:18:27.174125 systemd[1]: Starting ignition-setup.service...
Mar 17 18:18:27.184970 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:18:27.209809 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:18:27.209886 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:18:27.211929 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:18:27.223827 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:18:27.242422 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:18:27.256859 systemd[1]: Finished ignition-setup.service.
Mar 17 18:18:27.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.261248 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:18:27.341020 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:18:27.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.343000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:18:27.346394 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:18:27.395649 systemd-networkd[1181]: lo: Link UP
Mar 17 18:18:27.395665 systemd-networkd[1181]: lo: Gained carrier
Mar 17 18:18:27.396617 systemd-networkd[1181]: Enumeration completed
Mar 17 18:18:27.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.397170 systemd[1]: Started systemd-networkd.service.
Mar 17 18:18:27.401452 systemd[1]: Reached target network.target.
Mar 17 18:18:27.403200 systemd-networkd[1181]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:18:27.411225 systemd[1]: Starting iscsiuio.service...
Mar 17 18:18:27.416842 systemd-networkd[1181]: eth0: Link UP
Mar 17 18:18:27.417113 systemd-networkd[1181]: eth0: Gained carrier
Mar 17 18:18:27.425541 systemd[1]: Started iscsiuio.service.
Mar 17 18:18:27.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.429643 systemd[1]: Starting iscsid.service...
Mar 17 18:18:27.440212 iscsid[1186]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:18:27.440212 iscsid[1186]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Mar 17 18:18:27.440212 iscsid[1186]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:18:27.440212 iscsid[1186]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:18:27.440212 iscsid[1186]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:18:27.440212 iscsid[1186]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:18:27.439077 systemd-networkd[1181]: eth0: DHCPv4 address 172.31.23.140/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 18:18:27.467075 systemd[1]: Started iscsid.service.
Mar 17 18:18:27.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.470142 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:18:27.494304 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:18:27.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.496514 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:18:27.499090 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:18:27.502191 systemd[1]: Reached target remote-fs.target.
Mar 17 18:18:27.508081 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:18:27.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:27.528080 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:18:28.020219 ignition[1113]: Ignition 2.14.0
Mar 17 18:18:28.020749 ignition[1113]: Stage: fetch-offline
Mar 17 18:18:28.021127 ignition[1113]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:18:28.021191 ignition[1113]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:18:28.049336 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:18:28.050507 ignition[1113]: Ignition finished successfully
Mar 17 18:18:28.055072 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:18:28.067306 kernel: kauditd_printk_skb: 18 callbacks suppressed
Mar 17 18:18:28.068667 kernel: audit: type=1130 audit(1742235508.055:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.059270 systemd[1]: Starting ignition-fetch.service...
Mar 17 18:18:28.074891 ignition[1205]: Ignition 2.14.0
Mar 17 18:18:28.074918 ignition[1205]: Stage: fetch
Mar 17 18:18:28.075219 ignition[1205]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:18:28.075279 ignition[1205]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:18:28.089687 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:18:28.091941 ignition[1205]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:18:28.115957 ignition[1205]: INFO : PUT result: OK
Mar 17 18:18:28.120326 ignition[1205]: DEBUG : parsed url from cmdline: ""
Mar 17 18:18:28.120326 ignition[1205]: INFO : no config URL provided
Mar 17 18:18:28.120326 ignition[1205]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:18:28.126160 ignition[1205]: INFO : no config at "/usr/lib/ignition/user.ign"
Mar 17 18:18:28.126160 ignition[1205]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:18:28.130511 ignition[1205]: INFO : PUT result: OK
Mar 17 18:18:28.130511 ignition[1205]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 18:18:28.136822 ignition[1205]: INFO : GET result: OK
Mar 17 18:18:28.136822 ignition[1205]: DEBUG : parsing config with SHA512: ded1f0479d9b1586a49693b9292d62b1989762143652eaa8b6bf8cb28f069847806f1854185bc16cf8e813acd3514685c4f4933bcb8f96345db7e4d2e2f0cfb9
Mar 17 18:18:28.150294 unknown[1205]: fetched base config from "system"
Mar 17 18:18:28.152120 unknown[1205]: fetched base config from "system"
Mar 17 18:18:28.153866 unknown[1205]: fetched user config from "aws"
Mar 17 18:18:28.156495 ignition[1205]: fetch: fetch complete
Mar 17 18:18:28.156526 ignition[1205]: fetch: fetch passed
Mar 17 18:18:28.156643 ignition[1205]: Ignition finished successfully
Mar 17 18:18:28.163033 systemd[1]: Finished ignition-fetch.service.
Mar 17 18:18:28.174615 kernel: audit: type=1130 audit(1742235508.161:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.167261 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:18:28.190186 ignition[1211]: Ignition 2.14.0
Mar 17 18:18:28.191853 ignition[1211]: Stage: kargs
Mar 17 18:18:28.193287 ignition[1211]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:18:28.195597 ignition[1211]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:18:28.206230 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:18:28.208744 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:18:28.212032 ignition[1211]: INFO : PUT result: OK
Mar 17 18:18:28.217250 ignition[1211]: kargs: kargs passed
Mar 17 18:18:28.217370 ignition[1211]: Ignition finished successfully
Mar 17 18:18:28.221959 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:18:28.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.232986 kernel: audit: type=1130 audit(1742235508.220:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.226188 systemd[1]: Starting ignition-disks.service...
Mar 17 18:18:28.242282 ignition[1217]: Ignition 2.14.0
Mar 17 18:18:28.242312 ignition[1217]: Stage: disks
Mar 17 18:18:28.242634 ignition[1217]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:18:28.242691 ignition[1217]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:18:28.257098 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:18:28.259325 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:18:28.262283 ignition[1217]: INFO : PUT result: OK
Mar 17 18:18:28.268290 ignition[1217]: disks: disks passed
Mar 17 18:18:28.269885 ignition[1217]: Ignition finished successfully
Mar 17 18:18:28.273024 systemd[1]: Finished ignition-disks.service.
Mar 17 18:18:28.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.276163 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:18:28.284210 kernel: audit: type=1130 audit(1742235508.274:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.285865 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:18:28.287497 systemd[1]: Reached target local-fs.target.
Mar 17 18:18:28.289066 systemd[1]: Reached target sysinit.target.
Mar 17 18:18:28.293658 systemd[1]: Reached target basic.target.
Mar 17 18:18:28.307243 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:18:28.353239 systemd-fsck[1225]: ROOT: clean, 623/553520 files, 56021/553472 blocks
Mar 17 18:18:28.360886 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:18:28.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.365231 systemd[1]: Mounting sysroot.mount...
Mar 17 18:18:28.373443 kernel: audit: type=1130 audit(1742235508.362:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.393852 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:18:28.395086 systemd[1]: Mounted sysroot.mount.
Mar 17 18:18:28.397654 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:18:28.415066 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:18:28.418507 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:18:28.418628 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:18:28.418698 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:18:28.434722 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:18:28.454473 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:18:28.458755 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:18:28.478394 initrd-setup-root[1247]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:18:28.487843 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1242)
Mar 17 18:18:28.494369 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:18:28.494433 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:18:28.497021 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:18:28.499187 initrd-setup-root[1258]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:18:28.509711 initrd-setup-root[1279]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:18:28.515166 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:18:28.520365 initrd-setup-root[1289]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:18:28.528408 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:18:28.736031 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:18:28.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.741196 systemd[1]: Starting ignition-mount.service...
Mar 17 18:18:28.749348 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:18:28.757834 kernel: audit: type=1130 audit(1742235508.737:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.763228 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:18:28.763403 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:18:28.789313 ignition[1308]: INFO : Ignition 2.14.0
Mar 17 18:18:28.791128 ignition[1308]: INFO : Stage: mount
Mar 17 18:18:28.792956 ignition[1308]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:18:28.792956 ignition[1308]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:18:28.812269 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:18:28.815617 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:18:28.829954 kernel: audit: type=1130 audit(1742235508.820:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.827326 systemd[1]: Finished ignition-mount.service.
Mar 17 18:18:28.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.833144 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:18:28.833144 ignition[1308]: INFO : PUT result: OK
Mar 17 18:18:28.833144 ignition[1308]: INFO : mount: mount passed
Mar 17 18:18:28.833144 ignition[1308]: INFO : Ignition finished successfully
Mar 17 18:18:28.848308 kernel: audit: type=1130 audit(1742235508.830:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:28.834402 systemd[1]: Starting ignition-files.service...
Mar 17 18:18:28.856753 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:18:28.882842 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1318)
Mar 17 18:18:28.888091 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:18:28.888143 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:18:28.888166 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:18:28.904833 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:18:28.911007 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:18:28.930411 ignition[1337]: INFO : Ignition 2.14.0
Mar 17 18:18:28.930411 ignition[1337]: INFO : Stage: files
Mar 17 18:18:28.933673 ignition[1337]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:18:28.933673 ignition[1337]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:18:28.950915 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:18:28.953650 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:18:28.956699 ignition[1337]: INFO : PUT result: OK
Mar 17 18:18:28.961992 ignition[1337]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:18:28.968215 ignition[1337]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:18:28.971202 ignition[1337]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:18:29.016356 ignition[1337]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:18:29.019054 ignition[1337]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:18:29.023714 unknown[1337]: wrote ssh authorized keys file for user: core
Mar 17 18:18:29.025949 ignition[1337]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:18:29.029483 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 18:18:29.033160 ignition[1337]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 18:18:29.154530 ignition[1337]: INFO : GET result: OK
Mar 17 18:18:29.313350 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 18:18:29.319596 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:18:29.319596 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:18:29.319596 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Mar 17 18:18:29.319596 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:18:29.337424 ignition[1337]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3269783008"
Mar 17 18:18:29.340181 ignition[1337]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3269783008": device or resource busy
Mar 17 18:18:29.340181 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3269783008", trying btrfs: device or resource busy
Mar 17 18:18:29.340181 ignition[1337]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3269783008"
Mar 17 18:18:29.340181 ignition[1337]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3269783008"
Mar 17 18:18:29.362058 ignition[1337]: INFO : op(3): [started] unmounting "/mnt/oem3269783008"
Mar 17 18:18:29.364453 ignition[1337]: INFO : op(3): [finished] unmounting "/mnt/oem3269783008"
Mar 17 18:18:29.364453 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Mar 17 18:18:29.370029 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:18:29.373474 ignition[1337]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 18:18:29.378992 systemd-networkd[1181]: eth0: Gained IPv6LL
Mar 17 18:18:29.832914 ignition[1337]: INFO : GET result: OK
Mar 17 18:18:29.982064 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:18:29.986056 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:18:29.986056 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:18:29.986056 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:18:29.986056 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:18:29.986056 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:18:29.986056 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:18:29.986056 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:18:30.008928 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:18:30.008928 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 18:18:30.008928 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 18:18:30.008928 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:18:30.008928 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:18:30.037318 ignition[1337]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3722643937"
Mar 17 18:18:30.037318 ignition[1337]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3722643937": device or resource busy
Mar 17 18:18:30.037318 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3722643937", trying btrfs: device or resource busy
Mar 17 18:18:30.037318 ignition[1337]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3722643937"
Mar 17 18:18:30.037318 ignition[1337]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3722643937"
Mar 17 18:18:30.037318 ignition[1337]: INFO : op(6): [started] unmounting "/mnt/oem3722643937"
Mar 17 18:18:30.037318 ignition[1337]: INFO : op(6): [finished] unmounting "/mnt/oem3722643937"
Mar 17 18:18:30.037318 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:18:30.037318 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Mar 17 18:18:30.037318 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:18:30.071863 systemd[1]: mnt-oem3722643937.mount: Deactivated successfully.
Mar 17 18:18:30.092150 ignition[1337]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1594055532"
Mar 17 18:18:30.095346 ignition[1337]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1594055532": device or resource busy
Mar 17 18:18:30.095346 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1594055532", trying btrfs: device or resource busy
Mar 17 18:18:30.095346 ignition[1337]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1594055532"
Mar 17 18:18:30.105453 ignition[1337]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1594055532"
Mar 17 18:18:30.105453 ignition[1337]: INFO : op(9): [started] unmounting "/mnt/oem1594055532"
Mar 17 18:18:30.105453 ignition[1337]: INFO : op(9): [finished] unmounting "/mnt/oem1594055532"
Mar 17 18:18:30.105453 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Mar 17 18:18:30.105453 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 18:18:30.105453 ignition[1337]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 17 18:18:30.501683 ignition[1337]: INFO : GET result: OK
Mar 17 18:18:30.995308 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 18:18:30.999452 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Mar 17 18:18:31.003290 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:18:31.017837 ignition[1337]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem160613002"
Mar 17 18:18:31.021166 ignition[1337]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem160613002": device or resource busy
Mar 17 18:18:31.021166 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem160613002", trying btrfs: device or resource busy
Mar 17 18:18:31.021166 ignition[1337]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem160613002"
Mar 17 18:18:31.031099 ignition[1337]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem160613002"
Mar 17 18:18:31.031099 ignition[1337]: INFO : op(c): [started] unmounting "/mnt/oem160613002"
Mar 17 18:18:31.035852 ignition[1337]: INFO : op(c): [finished] unmounting "/mnt/oem160613002"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(13): [started]
processing unit "nvidia.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(13): [finished] processing unit "nvidia.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:18:31.039024 ignition[1337]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Mar 17 18:18:31.095852 ignition[1337]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Mar 17 18:18:31.109115 ignition[1337]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:18:31.112968 ignition[1337]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Mar 17 18:18:31.112968 ignition[1337]: INFO : files: files passed Mar 17 18:18:31.112968 ignition[1337]: INFO : Ignition finished successfully Mar 17 18:18:31.123245 systemd[1]: Finished ignition-files.service. Mar 17 18:18:31.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.134842 kernel: audit: type=1130 audit(1742235511.124:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.138749 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:18:31.140619 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:18:31.142976 systemd[1]: Starting ignition-quench.service... Mar 17 18:18:31.158321 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:18:31.158581 systemd[1]: Finished ignition-quench.service. Mar 17 18:18:31.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.173853 kernel: audit: type=1130 audit(1742235511.159:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:18:31.182599 initrd-setup-root-after-ignition[1364]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:18:31.187177 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:18:31.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.190672 systemd[1]: Reached target ignition-complete.target. Mar 17 18:18:31.203176 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:18:31.236935 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:18:31.239095 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:18:31.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.242633 systemd[1]: Reached target initrd-fs.target. Mar 17 18:18:31.242880 systemd[1]: Reached target initrd.target. Mar 17 18:18:31.248705 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:18:31.252709 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:18:31.277774 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:18:31.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:31.281184 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:18:31.303164 systemd[1]: Stopped target nss-lookup.target. 
Mar 17 18:18:31.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.304046 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:18:31.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.304517 systemd[1]: Stopped target timers.target.
Mar 17 18:18:31.374097 ignition[1377]: INFO : Ignition 2.14.0
Mar 17 18:18:31.374097 ignition[1377]: INFO : Stage: umount
Mar 17 18:18:31.374097 ignition[1377]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:18:31.374097 ignition[1377]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:18:31.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.305315 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:18:31.306095 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:18:31.306650 systemd[1]: Stopped target initrd.target.
Mar 17 18:18:31.307488 systemd[1]: Stopped target basic.target.
Mar 17 18:18:31.307744 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:18:31.422134 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:18:31.422134 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:18:31.308724 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:18:31.430227 ignition[1377]: INFO : PUT result: OK
Mar 17 18:18:31.309390 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:18:31.310381 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:18:31.311036 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:18:31.311612 systemd[1]: Stopped target sysinit.target.
Mar 17 18:18:31.312546 systemd[1]: Stopped target local-fs.target.
Mar 17 18:18:31.440882 ignition[1377]: INFO : umount: umount passed
Mar 17 18:18:31.440882 ignition[1377]: INFO : Ignition finished successfully
Mar 17 18:18:31.313513 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:18:31.314424 systemd[1]: Stopped target swap.target.
Mar 17 18:18:31.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.314651 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:18:31.314943 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:18:31.315890 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:18:31.316581 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:18:31.316863 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:18:31.317728 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:18:31.317995 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:18:31.318613 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:18:31.318877 systemd[1]: Stopped ignition-files.service.
Mar 17 18:18:31.320935 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:18:31.321629 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:18:31.321926 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:18:31.349172 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:18:31.350850 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:18:31.351194 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:18:31.355408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:18:31.355646 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:18:31.376640 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:18:31.376998 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:18:31.391173 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:18:31.443049 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:18:31.443538 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:18:31.459618 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:18:31.461481 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:18:31.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.486357 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:18:31.486481 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:18:31.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.489956 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:18:31.493396 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:18:31.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.496402 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 18:18:31.496500 systemd[1]: Stopped ignition-fetch.service.
Mar 17 18:18:31.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.501266 systemd[1]: Stopped target network.target.
Mar 17 18:18:31.504117 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:18:31.504248 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:18:31.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.509615 systemd[1]: Stopped target paths.target.
Mar 17 18:18:31.512423 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:18:31.514668 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:18:31.518016 systemd[1]: Stopped target slices.target.
Mar 17 18:18:31.520784 systemd[1]: Stopped target sockets.target.
Mar 17 18:18:31.523684 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:18:31.523772 systemd[1]: Closed iscsid.socket.
Mar 17 18:18:31.528162 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:18:31.528248 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:18:31.531225 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:18:31.534353 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:18:31.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.537309 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:18:31.537411 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:18:31.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.542657 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:18:31.545002 systemd-networkd[1181]: eth0: DHCPv6 lease lost
Mar 17 18:18:31.548710 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:18:31.553686 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:18:31.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.559000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:18:31.561000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:18:31.553969 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:18:31.556326 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:18:31.556544 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:18:31.558999 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:18:31.559074 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:18:31.564570 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:18:31.576006 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:18:31.576156 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:18:31.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.581499 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:18:31.581639 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:18:31.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.586555 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:18:31.586686 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:18:31.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.592039 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:18:31.606619 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:18:31.614019 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:18:31.615518 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:18:31.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.619234 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:18:31.621022 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:18:31.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.624285 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:18:31.624387 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:18:31.628033 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:18:31.629341 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:18:31.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.631758 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:18:31.631980 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:18:31.635070 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:18:31.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:31.635180 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:18:31.637734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:18:31.637890 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:18:31.641772 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:18:31.656597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:18:31.657129 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:18:31.661841 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:18:31.662089 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:18:31.664391 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:18:31.669095 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:18:31.691769 systemd[1]: Switching root.
Mar 17 18:18:31.726004 iscsid[1186]: iscsid shutting down.
Mar 17 18:18:31.728002 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:18:31.728106 systemd-journald[309]: Journal stopped
Mar 17 18:18:37.952477 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:18:37.952715 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:18:37.952755 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:18:37.952808 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:18:37.952854 kernel: SELinux: policy capability open_perms=1
Mar 17 18:18:37.952888 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:18:37.952961 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:18:37.953010 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:18:37.953056 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:18:37.953088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:18:37.955498 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:18:37.955563 systemd[1]: Successfully loaded SELinux policy in 130.637ms.
Mar 17 18:18:37.955667 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.920ms.
Mar 17 18:18:37.955705 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:18:37.955740 systemd[1]: Detected virtualization amazon.
Mar 17 18:18:37.955772 systemd[1]: Detected architecture arm64.
Mar 17 18:18:37.955860 systemd[1]: Detected first boot.
Mar 17 18:18:37.955897 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:18:37.955930 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:18:37.955964 kernel: kauditd_printk_skb: 45 callbacks suppressed
Mar 17 18:18:37.956007 kernel: audit: type=1400 audit(1742235513.160:84): avc: denied { associate } for pid=1410 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:18:37.956043 kernel: audit: type=1300 audit(1742235513.160:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b2 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1393 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:18:37.956076 kernel: audit: type=1327 audit(1742235513.160:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:18:37.956107 kernel: audit: type=1400 audit(1742235513.167:85): avc: denied { associate } for pid=1410 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:18:37.956137 kernel: audit: type=1300 audit(1742235513.167:85): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1393 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:18:37.956172 kernel: audit: type=1307 audit(1742235513.167:85): cwd="/"
Mar 17 18:18:37.956203 kernel: audit: type=1302 audit(1742235513.167:85): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:18:37.956232 kernel: audit: type=1302 audit(1742235513.167:85): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:18:37.956261 kernel: audit: type=1327 audit(1742235513.167:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:18:37.956293 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:18:37.956327 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:18:37.956362 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:18:37.956396 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:18:37.956428 kernel: audit: type=1334 audit(1742235517.545:86): prog-id=12 op=LOAD
Mar 17 18:18:37.956458 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:18:37.956491 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:18:37.956524 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:18:37.956554 systemd[1]: Stopped iscsid.service.
Mar 17 18:18:37.956587 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:18:37.956619 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:18:37.956655 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:18:37.956689 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:18:37.956719 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:18:37.956749 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:18:37.956780 systemd[1]: Created slice system-getty.slice.
Mar 17 18:18:37.956837 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:18:37.956875 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:18:37.956912 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:18:37.956943 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:18:37.956974 systemd[1]: Created slice user.slice.
Mar 17 18:18:37.957004 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:18:37.957034 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:18:37.957067 systemd[1]: Set up automount boot.automount.
Mar 17 18:18:37.957105 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:18:37.957141 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:18:37.957172 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:18:37.957208 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:18:37.957241 systemd[1]: Reached target integritysetup.target.
Mar 17 18:18:37.957273 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:18:37.957305 systemd[1]: Reached target remote-fs.target.
Mar 17 18:18:37.957337 systemd[1]: Reached target slices.target.
Mar 17 18:18:37.957369 systemd[1]: Reached target swap.target.
Mar 17 18:18:37.957399 systemd[1]: Reached target torcx.target.
Mar 17 18:18:37.957429 systemd[1]: Reached target veritysetup.target.
Mar 17 18:18:37.957459 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:18:37.957489 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:18:37.957527 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:18:37.957560 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:18:37.957590 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:18:37.957620 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:18:37.957663 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:18:37.957696 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:18:37.957727 systemd[1]: Mounting media.mount... Mar 17 18:18:37.963878 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:18:37.963938 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:18:37.963978 systemd[1]: Mounting tmp.mount... Mar 17 18:18:37.964009 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:18:37.964039 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:18:37.964073 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:18:37.964105 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:18:37.964136 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:18:37.964165 systemd[1]: Starting modprobe@drm.service... Mar 17 18:18:37.964197 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:18:37.964226 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:18:37.964260 systemd[1]: Starting modprobe@loop.service... Mar 17 18:18:37.964291 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:18:37.964321 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:18:37.964352 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:18:37.964381 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:18:37.964412 systemd[1]: Stopped systemd-fsck-usr.service. 
Mar 17 18:18:37.964442 systemd[1]: Stopped systemd-journald.service. Mar 17 18:18:37.964470 systemd[1]: Starting systemd-journald.service... Mar 17 18:18:37.964501 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:18:37.964534 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:18:37.964566 kernel: loop: module loaded Mar 17 18:18:37.964596 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:18:37.964625 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:18:37.964656 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:18:37.964687 systemd[1]: Stopped verity-setup.service. Mar 17 18:18:37.964717 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:18:37.964745 kernel: fuse: init (API version 7.34) Mar 17 18:18:37.964773 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:18:37.964829 systemd[1]: Mounted media.mount. Mar 17 18:18:37.964862 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:18:37.964891 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:18:37.964920 systemd[1]: Mounted tmp.mount. Mar 17 18:18:37.964953 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:18:37.964987 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:18:37.965019 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:18:37.965050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:18:37.965080 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:18:37.965109 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:18:37.965139 systemd[1]: Finished modprobe@drm.service. Mar 17 18:18:37.965170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:18:37.965202 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:18:37.965233 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:18:37.965331 systemd[1]: Finished modprobe@fuse.service. 
Mar 17 18:18:37.965369 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:18:37.965402 systemd[1]: Finished modprobe@loop.service. Mar 17 18:18:37.965454 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:18:37.965492 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:18:37.965528 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:18:37.967336 systemd-journald[1484]: Journal started Mar 17 18:18:37.967471 systemd-journald[1484]: Runtime Journal (/run/log/journal/ec2e0f247ab0d469a9b89e8eb34a4bfa) is 8.0M, max 75.4M, 67.4M free. Mar 17 18:18:32.676000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:18:32.901000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:18:32.901000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:18:32.901000 audit: BPF prog-id=10 op=LOAD Mar 17 18:18:32.901000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:18:32.901000 audit: BPF prog-id=11 op=LOAD Mar 17 18:18:32.901000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:18:33.160000 audit[1410]: AVC avc: denied { associate } for pid=1410 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:18:33.160000 audit[1410]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b2 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1393 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 
key=(null) Mar 17 18:18:33.160000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:18:33.167000 audit[1410]: AVC avc: denied { associate } for pid=1410 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:18:33.167000 audit[1410]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1393 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:18:33.167000 audit: CWD cwd="/" Mar 17 18:18:33.167000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:18:33.167000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:18:33.167000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:18:37.545000 audit: BPF prog-id=12 op=LOAD Mar 17 18:18:37.545000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:18:37.548000 audit: BPF prog-id=13 op=LOAD Mar 17 18:18:37.548000 audit: BPF prog-id=14 op=LOAD Mar 17 18:18:37.548000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:18:37.548000 
audit: BPF prog-id=5 op=UNLOAD Mar 17 18:18:37.550000 audit: BPF prog-id=15 op=LOAD Mar 17 18:18:37.550000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:18:37.550000 audit: BPF prog-id=16 op=LOAD Mar 17 18:18:37.550000 audit: BPF prog-id=17 op=LOAD Mar 17 18:18:37.550000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:18:37.551000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:18:37.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.560000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:18:37.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:18:37.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.818000 audit: BPF prog-id=18 op=LOAD Mar 17 18:18:37.989384 systemd[1]: Started systemd-journald.service. Mar 17 18:18:37.818000 audit: BPF prog-id=19 op=LOAD Mar 17 18:18:37.818000 audit: BPF prog-id=20 op=LOAD Mar 17 18:18:37.818000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:18:37.818000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:18:37.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:18:37.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:18:37.945000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:18:37.945000 audit[1484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe120e2a0 a2=4000 a3=1 items=0 ppid=1 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:18:37.945000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:18:37.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:37.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:18:37.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:33.149658 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:18:37.542982 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:18:33.158898 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:18:37.543003 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Mar 17 18:18:33.158953 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:18:37.554138 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:18:33.159023 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:18:37.972118 systemd[1]: Reached target network-pre.target. Mar 17 18:18:33.159049 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:18:37.978612 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Mar 17 18:18:33.159141 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:18:37.988179 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:18:33.159175 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:18:37.989719 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:18:33.159646 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:18:33.159745 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:18:33.159780 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:18:33.161557 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:18:33.161646 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:18:33.161697 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" 
path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:18:33.161738 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:18:33.161791 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:18:33.161892 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:18:36.662928 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:18:36.663454 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:18:36.663690 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:18:36.664180 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants 
/lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:18:36.664295 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:18:36.664430 /usr/lib/systemd/system-generators/torcx-generator[1410]: time="2025-03-17T18:18:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:18:38.002532 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:18:38.007769 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:18:38.009531 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:18:38.011820 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:18:38.013511 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:18:38.016002 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:18:38.022024 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:18:38.028466 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:18:38.037193 systemd-journald[1484]: Time spent on flushing to /var/log/journal/ec2e0f247ab0d469a9b89e8eb34a4bfa is 56.865ms for 1136 entries. Mar 17 18:18:38.037193 systemd-journald[1484]: System Journal (/var/log/journal/ec2e0f247ab0d469a9b89e8eb34a4bfa) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:18:38.120923 systemd-journald[1484]: Received client request to flush runtime journal. Mar 17 18:18:38.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:18:38.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.066982 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:18:38.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.068998 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:18:38.083339 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:18:38.122458 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:18:38.132497 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:18:38.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.136728 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:18:38.178824 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:18:38.189934 kernel: kauditd_printk_skb: 51 callbacks suppressed Mar 17 18:18:38.191410 kernel: audit: type=1130 audit(1742235518.179:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.182787 systemd[1]: Starting systemd-udev-settle.service... 
Mar 17 18:18:38.198400 udevadm[1528]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:18:38.371385 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:18:38.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.381000 kernel: audit: type=1130 audit(1742235518.371:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.943593 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:18:38.947830 systemd[1]: Starting systemd-udevd.service... Mar 17 18:18:38.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:38.944000 audit: BPF prog-id=21 op=LOAD Mar 17 18:18:38.961605 kernel: audit: type=1130 audit(1742235518.944:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:18:38.961755 kernel: audit: type=1334 audit(1742235518.944:139): prog-id=21 op=LOAD Mar 17 18:18:38.961842 kernel: audit: type=1334 audit(1742235518.944:140): prog-id=22 op=LOAD Mar 17 18:18:38.944000 audit: BPF prog-id=22 op=LOAD Mar 17 18:18:38.944000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:18:38.966395 kernel: audit: type=1334 audit(1742235518.944:141): prog-id=7 op=UNLOAD Mar 17 18:18:38.966502 kernel: audit: type=1334 audit(1742235518.944:142): prog-id=8 op=UNLOAD Mar 17 18:18:38.944000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:18:39.000323 systemd-udevd[1529]: Using default interface naming scheme 'v252'. Mar 17 18:18:39.077404 systemd[1]: Started systemd-udevd.service. Mar 17 18:18:39.084236 systemd[1]: Starting systemd-networkd.service... Mar 17 18:18:39.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:39.108536 kernel: audit: type=1130 audit(1742235519.079:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:39.108679 kernel: audit: type=1334 audit(1742235519.081:144): prog-id=23 op=LOAD Mar 17 18:18:39.108726 kernel: audit: type=1334 audit(1742235519.102:145): prog-id=24 op=LOAD Mar 17 18:18:39.081000 audit: BPF prog-id=23 op=LOAD Mar 17 18:18:39.102000 audit: BPF prog-id=24 op=LOAD Mar 17 18:18:39.105000 audit: BPF prog-id=25 op=LOAD Mar 17 18:18:39.105000 audit: BPF prog-id=26 op=LOAD Mar 17 18:18:39.110943 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:18:39.177199 (udev-worker)[1541]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:18:39.215625 systemd[1]: Started systemd-userdbd.service. 
Mar 17 18:18:39.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:39.260054 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 18:18:39.415112 systemd-networkd[1534]: lo: Link UP Mar 17 18:18:39.415624 systemd-networkd[1534]: lo: Gained carrier Mar 17 18:18:39.416701 systemd-networkd[1534]: Enumeration completed Mar 17 18:18:39.416919 systemd[1]: Started systemd-networkd.service. Mar 17 18:18:39.419439 systemd-networkd[1534]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:18:39.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:39.423317 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:18:39.430897 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:18:39.432174 systemd-networkd[1534]: eth0: Link UP Mar 17 18:18:39.432776 systemd-networkd[1534]: eth0: Gained carrier Mar 17 18:18:39.444155 systemd-networkd[1534]: eth0: DHCPv4 address 172.31.23.140/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 18:18:39.608230 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:18:39.615946 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:18:39.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:39.620586 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:18:39.695568 lvm[1648]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 17 18:18:39.733779 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:18:39.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:39.735884 systemd[1]: Reached target cryptsetup.target. Mar 17 18:18:39.739911 systemd[1]: Starting lvm2-activation.service... Mar 17 18:18:39.749035 lvm[1649]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:18:39.784696 systemd[1]: Finished lvm2-activation.service. Mar 17 18:18:39.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:18:39.786684 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:18:39.788542 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:18:39.788599 systemd[1]: Reached target local-fs.target. Mar 17 18:18:39.790499 systemd[1]: Reached target machines.target. Mar 17 18:18:39.795190 systemd[1]: Starting ldconfig.service... Mar 17 18:18:39.803679 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:18:39.804036 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:18:39.806525 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:18:39.810767 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:18:39.817275 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:18:39.821770 systemd[1]: Starting systemd-sysext.service... 
Mar 17 18:18:39.829926 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1651 (bootctl)
Mar 17 18:18:39.833055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:18:39.857933 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:18:39.876707 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:18:39.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:39.887512 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:18:39.887959 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:18:39.911925 kernel: loop0: detected capacity change from 0 to 189592
Mar 17 18:18:40.014395 systemd-fsck[1661]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:18:40.014395 systemd-fsck[1661]: /dev/nvme0n1p1: 236 files, 117179/258078 clusters
Mar 17 18:18:40.017318 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:18:40.021256 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:18:40.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.028374 systemd[1]: Mounting boot.mount...
Mar 17 18:18:40.047108 kernel: loop1: detected capacity change from 0 to 189592
Mar 17 18:18:40.059354 systemd[1]: Mounted boot.mount.
Mar 17 18:18:40.089677 (sd-sysext)[1665]: Using extensions 'kubernetes'.
Mar 17 18:18:40.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.091129 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:18:40.095512 (sd-sysext)[1665]: Merged extensions into '/usr'.
Mar 17 18:18:40.140729 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:18:40.143062 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:18:40.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.149450 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:18:40.151330 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.155482 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:18:40.163670 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:18:40.168576 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:18:40.170309 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.170598 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:18:40.178198 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:18:40.181715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:18:40.182246 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:18:40.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.185141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:18:40.185467 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:18:40.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.188617 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:18:40.189273 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:18:40.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.192644 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:18:40.192977 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.195330 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:18:40.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.200190 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:18:40.205573 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:18:40.219988 systemd[1]: Reloading.
Mar 17 18:18:40.289235 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:18:40.344374 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:18:40.372143 /usr/lib/systemd/system-generators/torcx-generator[1708]: time="2025-03-17T18:18:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:18:40.375359 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:18:40.375745 /usr/lib/systemd/system-generators/torcx-generator[1708]: time="2025-03-17T18:18:40Z" level=info msg="torcx already run"
Mar 17 18:18:40.515311 systemd-networkd[1534]: eth0: Gained IPv6LL
Mar 17 18:18:40.576483 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:18:40.576523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:18:40.621914 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:18:40.780000 audit: BPF prog-id=27 op=LOAD
Mar 17 18:18:40.780000 audit: BPF prog-id=18 op=UNLOAD
Mar 17 18:18:40.780000 audit: BPF prog-id=28 op=LOAD
Mar 17 18:18:40.780000 audit: BPF prog-id=29 op=LOAD
Mar 17 18:18:40.780000 audit: BPF prog-id=19 op=UNLOAD
Mar 17 18:18:40.780000 audit: BPF prog-id=20 op=UNLOAD
Mar 17 18:18:40.782000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:18:40.782000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:18:40.785000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:18:40.785000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:18:40.785000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:18:40.785000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:18:40.787000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:18:40.787000 audit: BPF prog-id=24 op=UNLOAD
Mar 17 18:18:40.788000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:18:40.788000 audit: BPF prog-id=35 op=LOAD
Mar 17 18:18:40.788000 audit: BPF prog-id=25 op=UNLOAD
Mar 17 18:18:40.788000 audit: BPF prog-id=26 op=UNLOAD
Mar 17 18:18:40.798684 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:18:40.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.803465 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:18:40.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.813259 systemd[1]: Starting audit-rules.service...
Mar 17 18:18:40.817594 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:18:40.823547 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:18:40.831000 audit: BPF prog-id=36 op=LOAD
Mar 17 18:18:40.834981 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:18:40.837000 audit: BPF prog-id=37 op=LOAD
Mar 17 18:18:40.842786 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:18:40.847042 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:18:40.871064 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.873989 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:18:40.880531 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:18:40.885005 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:18:40.886689 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.887088 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:18:40.897989 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.898347 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.898594 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:18:40.904740 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.911041 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:18:40.913144 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.913466 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:18:40.915294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:18:40.917154 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:18:40.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.920348 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:18:40.923309 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:18:40.926020 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:18:40.926000 audit[1765]: SYSTEM_BOOT pid=1765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.937668 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:18:40.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.941645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:18:40.942006 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:18:40.944213 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:18:40.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.945963 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:18:40.946298 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:18:40.948428 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:18:40.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.959193 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:18:40.959523 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:18:40.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:40.972191 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:18:40.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:41.062689 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:18:41.065124 systemd[1]: Reached target time-set.target.
Mar 17 18:18:41.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:18:41.093000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:18:41.093000 audit[1784]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc8ac5e0 a2=420 a3=0 items=0 ppid=1759 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:18:41.093000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:18:41.095954 augenrules[1784]: No rules
Mar 17 18:18:41.097657 systemd[1]: Finished audit-rules.service.
Mar 17 18:18:41.109278 systemd-resolved[1763]: Positive Trust Anchors:
Mar 17 18:18:41.109845 systemd-resolved[1763]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:18:41.110044 systemd-resolved[1763]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:18:41.178015 systemd-resolved[1763]: Defaulting to hostname 'linux'.
Mar 17 18:18:41.181524 systemd[1]: Started systemd-resolved.service.
Mar 17 18:18:41.183379 systemd[1]: Reached target network.target.
Mar 17 18:18:41.185055 systemd[1]: Reached target network-online.target.
Mar 17 18:18:41.186886 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:18:41.451443 ldconfig[1650]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:18:41.460599 systemd[1]: Finished ldconfig.service.
Mar 17 18:18:41.464698 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:18:41.480525 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:18:41.482517 systemd[1]: Reached target sysinit.target.
Mar 17 18:18:41.484363 systemd[1]: Started motdgen.path.
Mar 17 18:18:41.486047 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:18:41.488626 systemd[1]: Started logrotate.timer.
Mar 17 18:18:41.490434 systemd[1]: Started mdadm.timer.
Mar 17 18:18:41.491926 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:18:41.493735 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:18:41.493829 systemd[1]: Reached target paths.target.
Mar 17 18:18:41.495386 systemd[1]: Reached target timers.target.
Mar 17 18:18:41.497905 systemd[1]: Listening on dbus.socket.
Mar 17 18:18:41.501737 systemd[1]: Starting docker.socket...
Mar 17 18:18:41.509072 systemd[1]: Listening on sshd.socket.
Mar 17 18:18:41.511871 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:18:41.513050 systemd[1]: Listening on docker.socket.
Mar 17 18:18:41.515100 systemd[1]: Reached target sockets.target.
Mar 17 18:18:41.516959 systemd[1]: Reached target basic.target.
Mar 17 18:18:41.518819 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:18:41.519086 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:18:41.531194 systemd[1]: Started amazon-ssm-agent.service.
Mar 17 18:18:41.537098 systemd[1]: Starting containerd.service...
Mar 17 18:18:41.541488 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Mar 17 18:18:41.547196 systemd[1]: Starting dbus.service...
Mar 17 18:18:41.550887 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:18:41.555296 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:18:41.558253 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:18:41.562644 systemd[1]: Starting kubelet.service...
Mar 17 18:18:41.567837 systemd[1]: Starting motdgen.service...
Mar 17 18:18:41.587736 jq[1796]: false
Mar 17 18:18:41.574655 systemd[1]: Started nvidia.service.
Mar 17 18:18:41.579267 systemd[1]: Starting prepare-helm.service...
Mar 17 18:18:41.585645 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:18:41.590200 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:18:41.605230 systemd[1]: Starting systemd-logind.service...
Mar 17 18:18:41.644858 jq[1806]: true
Mar 17 18:18:41.607077 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:18:41.607292 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:18:41.609153 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:18:41.611017 systemd[1]: Starting update-engine.service...
Mar 17 18:18:41.618046 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:18:41.625530 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:18:41.626077 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:18:41.710151 jq[1809]: true
Mar 17 18:18:41.655304 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:18:41.655680 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:18:41.774291 tar[1813]: linux-arm64/helm
Mar 17 18:18:41.809650 dbus-daemon[1795]: [system] SELinux support is enabled
Mar 17 18:18:41.812542 dbus-daemon[1795]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1534 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 17 18:18:41.809961 systemd[1]: Started dbus.service.
Mar 17 18:18:41.818264 dbus-daemon[1795]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 18:18:41.815004 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:18:41.815057 systemd[1]: Reached target system-config.target.
Mar 17 18:18:41.816935 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:18:41.816980 systemd[1]: Reached target user-config.target.
Mar 17 18:18:41.828943 systemd[1]: Starting systemd-hostnamed.service...
Mar 17 18:18:41.839390 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:18:41.839814 systemd[1]: Finished motdgen.service.
Mar 17 18:18:41.847633 extend-filesystems[1797]: Found loop1
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1p1
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1p2
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1p3
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found usr
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1p4
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1p6
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1p7
Mar 17 18:18:41.849629 extend-filesystems[1797]: Found nvme0n1p9
Mar 17 18:18:41.849629 extend-filesystems[1797]: Checking size of /dev/nvme0n1p9
Mar 17 18:18:41.953834 extend-filesystems[1797]: Resized partition /dev/nvme0n1p9
Mar 17 18:18:41.981218 extend-filesystems[1852]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:18:41.994843 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Mar 17 18:18:42.083302 amazon-ssm-agent[1792]: 2025/03/17 18:18:42 Failed to load instance info from vault. RegistrationKey does not exist.
Mar 17 18:18:42.084879 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Mar 17 18:18:42.100507 amazon-ssm-agent[1792]: Initializing new seelog logger
Mar 17 18:18:42.102156 amazon-ssm-agent[1792]: New Seelog Logger Creation Complete
Mar 17 18:18:42.103080 bash[1860]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:18:42.104817 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:18:42.107259 extend-filesystems[1852]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 17 18:18:42.107259 extend-filesystems[1852]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 18:18:42.107259 extend-filesystems[1852]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 17 18:18:42.116521 extend-filesystems[1797]: Resized filesystem in /dev/nvme0n1p9
Mar 17 18:18:42.109639 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:18:42.110070 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:18:42.136884 amazon-ssm-agent[1792]: 2025/03/17 18:18:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 18:18:42.138738 amazon-ssm-agent[1792]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 18:18:42.141827 amazon-ssm-agent[1792]: 2025/03/17 18:18:42 processing appconfig overrides
Mar 17 18:18:42.165672 update_engine[1805]: I0317 18:18:42.165194 1805 main.cc:92] Flatcar Update Engine starting
Mar 17 18:18:42.184164 update_engine[1805]: I0317 18:18:42.183955 1805 update_check_scheduler.cc:74] Next update check in 3m41s
Mar 17 18:18:42.185150 systemd[1]: Started update-engine.service.
Mar 17 18:18:42.188212 env[1819]: time="2025-03-17T18:18:42.188134476Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:18:42.190106 systemd[1]: Started locksmithd.service.
Mar 17 18:18:42.200406 systemd[1]: nvidia.service: Deactivated successfully.
Mar 17 18:18:42.252873 dbus-daemon[1795]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 17 18:18:42.253201 systemd[1]: Started systemd-hostnamed.service.
Mar 17 18:18:42.265522 dbus-daemon[1795]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1832 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 17 18:18:42.270222 systemd[1]: Starting polkit.service...
Mar 17 18:18:42.327252 polkitd[1873]: Started polkitd version 121
Mar 17 18:18:42.401048 polkitd[1873]: Loading rules from directory /etc/polkit-1/rules.d
Mar 17 18:18:42.401202 polkitd[1873]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 17 18:18:42.408774 systemd-logind[1804]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 18:18:42.408885 systemd-logind[1804]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 17 18:18:42.413848 systemd-logind[1804]: New seat seat0.
Mar 17 18:18:42.417245 polkitd[1873]: Finished loading, compiling and executing 2 rules
Mar 17 18:18:42.418035 dbus-daemon[1795]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 17 18:18:42.418289 systemd[1]: Started polkit.service.
Mar 17 18:18:42.426541 polkitd[1873]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 17 18:18:42.430195 systemd[1]: Started systemd-logind.service.
Mar 17 18:18:42.478008 env[1819]: time="2025-03-17T18:18:42.477932414Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:18:42.480761 env[1819]: time="2025-03-17T18:18:42.480696398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:18:42.493154 env[1819]: time="2025-03-17T18:18:42.493054598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:18:42.493385 env[1819]: time="2025-03-17T18:18:42.493343294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:18:42.494061 env[1819]: time="2025-03-17T18:18:42.493994978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:18:42.495015 env[1819]: time="2025-03-17T18:18:42.494961242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:18:42.495258 env[1819]: time="2025-03-17T18:18:42.495217466Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:18:42.495389 env[1819]: time="2025-03-17T18:18:42.495358262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:18:42.495691 systemd-resolved[1763]: System hostname changed to 'ip-172-31-23-140'.
Mar 17 18:18:42.495695 systemd-hostnamed[1832]: Hostname set to (transient)
Mar 17 18:18:42.497587 env[1819]: time="2025-03-17T18:18:42.497534426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:18:42.498400 env[1819]: time="2025-03-17T18:18:42.498347054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:18:42.501015 env[1819]: time="2025-03-17T18:18:42.500953538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:18:42.501221 env[1819]: time="2025-03-17T18:18:42.501185390Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:18:42.501529 env[1819]: time="2025-03-17T18:18:42.501491054Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:18:42.505206 env[1819]: time="2025-03-17T18:18:42.505075094Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:18:42.519301 env[1819]: time="2025-03-17T18:18:42.519238490Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:18:42.519546 env[1819]: time="2025-03-17T18:18:42.519503690Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:18:42.519705 env[1819]: time="2025-03-17T18:18:42.519669362Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:18:42.519998 env[1819]: time="2025-03-17T18:18:42.519943370Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.520257 env[1819]: time="2025-03-17T18:18:42.520219514Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.520426 env[1819]: time="2025-03-17T18:18:42.520393106Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.520553 env[1819]: time="2025-03-17T18:18:42.520523582Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.521301 env[1819]: time="2025-03-17T18:18:42.521233658Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.521514 env[1819]: time="2025-03-17T18:18:42.521478950Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.521679 env[1819]: time="2025-03-17T18:18:42.521648330Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.521826 env[1819]: time="2025-03-17T18:18:42.521770430Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.521965 env[1819]: time="2025-03-17T18:18:42.521931578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:18:42.522294 env[1819]: time="2025-03-17T18:18:42.522262166Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:18:42.524161 env[1819]: time="2025-03-17T18:18:42.524107430Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:18:42.534623 env[1819]: time="2025-03-17T18:18:42.534560618Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:18:42.534896 env[1819]: time="2025-03-17T18:18:42.534858566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.535086 env[1819]: time="2025-03-17T18:18:42.535049738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:18:42.535367 env[1819]: time="2025-03-17T18:18:42.535318826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.535543 env[1819]: time="2025-03-17T18:18:42.535507574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.535671 env[1819]: time="2025-03-17T18:18:42.535641842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.535789 env[1819]: time="2025-03-17T18:18:42.535760270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.535976 env[1819]: time="2025-03-17T18:18:42.535938902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.536175 env[1819]: time="2025-03-17T18:18:42.536135942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.536311 env[1819]: time="2025-03-17T18:18:42.536281502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.536438 env[1819]: time="2025-03-17T18:18:42.536402138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.536580 env[1819]: time="2025-03-17T18:18:42.536549402Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:18:42.537078 env[1819]: time="2025-03-17T18:18:42.537025670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.537297 env[1819]: time="2025-03-17T18:18:42.537259250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:18:42.537470 env[1819]: time="2025-03-17T18:18:42.537430622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 Mar 17 18:18:42.537612 env[1819]: time="2025-03-17T18:18:42.537579902Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:18:42.537748 env[1819]: time="2025-03-17T18:18:42.537708950Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:18:42.537893 env[1819]: time="2025-03-17T18:18:42.537863138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:18:42.538075 env[1819]: time="2025-03-17T18:18:42.538035962Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:18:42.538374 env[1819]: time="2025-03-17T18:18:42.538233878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:18:42.540319 env[1819]: time="2025-03-17T18:18:42.540189230Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:18:42.542148 env[1819]: time="2025-03-17T18:18:42.542097614Z" level=info msg="Connect containerd service" Mar 17 18:18:42.542382 env[1819]: time="2025-03-17T18:18:42.542340758Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:18:42.545138 env[1819]: time="2025-03-17T18:18:42.545071550Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:18:42.548152 env[1819]: time="2025-03-17T18:18:42.548092754Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:18:42.552208 env[1819]: time="2025-03-17T18:18:42.552152966Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 17 18:18:42.552561 env[1819]: time="2025-03-17T18:18:42.552520730Z" level=info msg="containerd successfully booted in 0.377773s" Mar 17 18:18:42.552654 systemd[1]: Started containerd.service. Mar 17 18:18:42.559991 env[1819]: time="2025-03-17T18:18:42.558500702Z" level=info msg="Start subscribing containerd event" Mar 17 18:18:42.562357 env[1819]: time="2025-03-17T18:18:42.562284146Z" level=info msg="Start recovering state" Mar 17 18:18:42.562677 env[1819]: time="2025-03-17T18:18:42.562642910Z" level=info msg="Start event monitor" Mar 17 18:18:42.565009 env[1819]: time="2025-03-17T18:18:42.564917342Z" level=info msg="Start snapshots syncer" Mar 17 18:18:42.565205 env[1819]: time="2025-03-17T18:18:42.565003454Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:18:42.565205 env[1819]: time="2025-03-17T18:18:42.565068518Z" level=info msg="Start streaming server" Mar 17 18:18:42.879658 coreos-metadata[1794]: Mar 17 18:18:42.879 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 18:18:42.882167 coreos-metadata[1794]: Mar 17 18:18:42.880 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Mar 17 18:18:42.882167 coreos-metadata[1794]: Mar 17 18:18:42.881 INFO Fetch successful Mar 17 18:18:42.882167 coreos-metadata[1794]: Mar 17 18:18:42.881 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 18:18:42.884671 coreos-metadata[1794]: Mar 17 18:18:42.883 INFO Fetch successful Mar 17 18:18:42.887097 unknown[1794]: wrote ssh authorized keys file for user: core Mar 17 18:18:42.943514 update-ssh-keys[1949]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:18:42.945622 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Mar 17 18:18:43.086492 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Create new startup processor Mar 17 18:18:43.091135 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [LongRunningPluginsManager] registered plugins: {} Mar 17 18:18:43.095184 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing bookkeeping folders Mar 17 18:18:43.095385 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO removing the completed state files Mar 17 18:18:43.095511 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing bookkeeping folders for long running plugins Mar 17 18:18:43.095637 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Mar 17 18:18:43.095762 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing healthcheck folders for long running plugins Mar 17 18:18:43.095913 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing locations for inventory plugin Mar 17 18:18:43.097566 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing default location for custom inventory Mar 17 18:18:43.097759 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing default location for file inventory Mar 17 18:18:43.097918 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Initializing default location for role inventory Mar 17 18:18:43.098040 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Init the cloudwatchlogs publisher Mar 17 18:18:43.098160 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:downloadContent Mar 17 18:18:43.098300 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:runDocument Mar 17 18:18:43.098435 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:softwareInventory Mar 17 18:18:43.098752 
amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:updateSsmAgent Mar 17 18:18:43.098931 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:configureDocker Mar 17 18:18:43.099068 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:refreshAssociation Mar 17 18:18:43.099188 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:runPowerShellScript Mar 17 18:18:43.099531 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:runDockerAction Mar 17 18:18:43.099694 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform independent plugin aws:configurePackage Mar 17 18:18:43.099840 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Successfully loaded platform dependent plugin aws:runShellScript Mar 17 18:18:43.099980 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Mar 17 18:18:43.100110 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO OS: linux, Arch: arm64 Mar 17 18:18:43.103851 amazon-ssm-agent[1792]: datastore file /var/lib/amazon/ssm/i-0a7f375ea7e48d99e/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Mar 17 18:18:43.219712 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] Starting document processing engine... 
Mar 17 18:18:43.314330 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] [EngineProcessor] Starting Mar 17 18:18:43.408675 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Mar 17 18:18:43.503297 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] Starting session document processing engine... Mar 17 18:18:43.597948 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] [EngineProcessor] Starting Mar 17 18:18:43.688296 tar[1813]: linux-arm64/LICENSE Mar 17 18:18:43.688875 tar[1813]: linux-arm64/README.md Mar 17 18:18:43.692935 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Mar 17 18:18:43.697871 systemd[1]: Finished prepare-helm.service. Mar 17 18:18:43.788597 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0a7f375ea7e48d99e, requestId: 03336ca3-5045-47c4-83a4-53c03f98cd78 Mar 17 18:18:43.846501 locksmithd[1870]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:18:43.883863 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] Starting message polling Mar 17 18:18:43.979419 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] Starting send replies to MDS Mar 17 18:18:44.075249 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [instanceID=i-0a7f375ea7e48d99e] Starting association polling Mar 17 18:18:44.171131 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Mar 17 18:18:44.267219 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] [Association] Launching response handler Mar 17 18:18:44.363554 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] [Association] 
[EngineProcessor] Initial processing Mar 17 18:18:44.460042 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Mar 17 18:18:44.557111 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Mar 17 18:18:44.559117 systemd[1]: Started kubelet.service. Mar 17 18:18:44.654028 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [HealthCheck] HealthCheck reporting agent health. Mar 17 18:18:44.751083 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [OfflineService] Starting document processing engine... Mar 17 18:18:44.848334 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [OfflineService] [EngineProcessor] Starting Mar 17 18:18:44.945880 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [OfflineService] [EngineProcessor] Initial processing Mar 17 18:18:45.043614 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [OfflineService] Starting message polling Mar 17 18:18:45.118880 sshd_keygen[1833]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:18:45.141511 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [OfflineService] Starting send replies to MDS Mar 17 18:18:45.157187 systemd[1]: Finished sshd-keygen.service. Mar 17 18:18:45.161756 systemd[1]: Starting issuegen.service... Mar 17 18:18:45.173727 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:18:45.174145 systemd[1]: Finished issuegen.service. Mar 17 18:18:45.179046 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:18:45.195529 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:18:45.200202 systemd[1]: Started getty@tty1.service. Mar 17 18:18:45.204625 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:18:45.206779 systemd[1]: Reached target getty.target. Mar 17 18:18:45.208691 systemd[1]: Reached target multi-user.target. 
Mar 17 18:18:45.213137 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:18:45.230285 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:18:45.230646 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:18:45.232809 systemd[1]: Startup finished in 1.156s (kernel) + 8.937s (initrd) + 12.703s (userspace) = 22.796s. Mar 17 18:18:45.239553 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [LongRunningPluginsManager] starting long running plugin manager Mar 17 18:18:45.337828 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Mar 17 18:18:45.436396 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] listening reply. Mar 17 18:18:45.535110 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Mar 17 18:18:45.575416 kubelet[2006]: E0317 18:18:45.575358 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:18:45.579096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:18:45.579407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:18:45.579876 systemd[1]: kubelet.service: Consumed 1.416s CPU time. 
Mar 17 18:18:45.633912 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [StartupProcessor] Executing startup processor tasks Mar 17 18:18:45.732933 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Mar 17 18:18:45.832205 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Mar 17 18:18:45.931542 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Mar 17 18:18:46.031158 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a7f375ea7e48d99e?role=subscribe&stream=input Mar 17 18:18:46.131039 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a7f375ea7e48d99e?role=subscribe&stream=input Mar 17 18:18:46.230973 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] Starting receiving message from control channel Mar 17 18:18:46.331176 amazon-ssm-agent[1792]: 2025-03-17 18:18:43 INFO [MessageGatewayService] [EngineProcessor] Initial processing Mar 17 18:18:50.219615 systemd[1]: Created slice system-sshd.slice. Mar 17 18:18:50.222028 systemd[1]: Started sshd@0-172.31.23.140:22-139.178.89.65:52422.service. Mar 17 18:18:50.495938 sshd[2027]: Accepted publickey for core from 139.178.89.65 port 52422 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:18:50.501626 sshd[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:50.521268 systemd[1]: Created slice user-500.slice. Mar 17 18:18:50.523980 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:18:50.532918 systemd-logind[1804]: New session 1 of user core. 
Mar 17 18:18:50.543587 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:18:50.546709 systemd[1]: Starting user@500.service... Mar 17 18:18:50.553861 (systemd)[2030]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:50.745360 systemd[2030]: Queued start job for default target default.target. Mar 17 18:18:50.746421 systemd[2030]: Reached target paths.target. Mar 17 18:18:50.746473 systemd[2030]: Reached target sockets.target. Mar 17 18:18:50.746505 systemd[2030]: Reached target timers.target. Mar 17 18:18:50.746535 systemd[2030]: Reached target basic.target. Mar 17 18:18:50.746629 systemd[2030]: Reached target default.target. Mar 17 18:18:50.746710 systemd[2030]: Startup finished in 180ms. Mar 17 18:18:50.746886 systemd[1]: Started user@500.service. Mar 17 18:18:50.748884 systemd[1]: Started session-1.scope. Mar 17 18:18:50.895282 systemd[1]: Started sshd@1-172.31.23.140:22-139.178.89.65:52428.service. Mar 17 18:18:51.073919 sshd[2039]: Accepted publickey for core from 139.178.89.65 port 52428 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:18:51.076516 sshd[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:51.083651 systemd-logind[1804]: New session 2 of user core. Mar 17 18:18:51.085554 systemd[1]: Started session-2.scope. Mar 17 18:18:51.216389 sshd[2039]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:51.222078 systemd-logind[1804]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:18:51.223609 systemd[1]: sshd@1-172.31.23.140:22-139.178.89.65:52428.service: Deactivated successfully. Mar 17 18:18:51.224985 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:18:51.226514 systemd-logind[1804]: Removed session 2. Mar 17 18:18:51.244538 systemd[1]: Started sshd@2-172.31.23.140:22-139.178.89.65:47972.service. 
Mar 17 18:18:51.420394 sshd[2045]: Accepted publickey for core from 139.178.89.65 port 47972 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:18:51.423352 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:51.432072 systemd-logind[1804]: New session 3 of user core. Mar 17 18:18:51.432397 systemd[1]: Started session-3.scope. Mar 17 18:18:51.555700 sshd[2045]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:51.560395 systemd[1]: sshd@2-172.31.23.140:22-139.178.89.65:47972.service: Deactivated successfully. Mar 17 18:18:51.561641 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:18:51.562950 systemd-logind[1804]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:18:51.564556 systemd-logind[1804]: Removed session 3. Mar 17 18:18:51.585085 systemd[1]: Started sshd@3-172.31.23.140:22-139.178.89.65:47986.service. Mar 17 18:18:51.764876 sshd[2051]: Accepted publickey for core from 139.178.89.65 port 47986 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:18:51.767506 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:51.774891 systemd-logind[1804]: New session 4 of user core. Mar 17 18:18:51.776348 systemd[1]: Started session-4.scope. Mar 17 18:18:51.910695 sshd[2051]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:51.916093 systemd-logind[1804]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:18:51.916821 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:18:51.918123 systemd[1]: sshd@3-172.31.23.140:22-139.178.89.65:47986.service: Deactivated successfully. Mar 17 18:18:51.919509 systemd-logind[1804]: Removed session 4. Mar 17 18:18:51.937984 systemd[1]: Started sshd@4-172.31.23.140:22-139.178.89.65:47990.service. 
Mar 17 18:18:52.107676 sshd[2057]: Accepted publickey for core from 139.178.89.65 port 47990 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:18:52.111165 sshd[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:52.120548 systemd-logind[1804]: New session 5 of user core. Mar 17 18:18:52.121514 systemd[1]: Started session-5.scope. Mar 17 18:18:52.266766 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:18:52.267853 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:18:52.319819 systemd[1]: Starting docker.service... Mar 17 18:18:52.391431 env[2070]: time="2025-03-17T18:18:52.391247399Z" level=info msg="Starting up" Mar 17 18:18:52.393971 env[2070]: time="2025-03-17T18:18:52.393926435Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:18:52.394157 env[2070]: time="2025-03-17T18:18:52.394128095Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:18:52.394285 env[2070]: time="2025-03-17T18:18:52.394251923Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:18:52.394408 env[2070]: time="2025-03-17T18:18:52.394381211Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:18:52.398076 env[2070]: time="2025-03-17T18:18:52.398031179Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:18:52.398245 env[2070]: time="2025-03-17T18:18:52.398217899Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:18:52.398376 env[2070]: time="2025-03-17T18:18:52.398343623Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:18:52.398483 env[2070]: time="2025-03-17T18:18:52.398456507Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:18:52.415364 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2830347533-merged.mount: Deactivated successfully. Mar 17 18:18:52.444323 env[2070]: time="2025-03-17T18:18:52.444275951Z" level=info msg="Loading containers: start." Mar 17 18:18:52.717003 kernel: Initializing XFRM netlink socket Mar 17 18:18:52.806366 env[2070]: time="2025-03-17T18:18:52.806304493Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:18:52.808571 (udev-worker)[2080]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:18:52.811286 systemd-timesyncd[1764]: Network configuration changed, trying to establish connection. Mar 17 18:18:52.918258 systemd-networkd[1534]: docker0: Link UP Mar 17 18:18:52.940188 env[2070]: time="2025-03-17T18:18:52.940118858Z" level=info msg="Loading containers: done." Mar 17 18:18:52.963091 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3960802322-merged.mount: Deactivated successfully. Mar 17 18:18:52.973548 env[2070]: time="2025-03-17T18:18:52.973472294Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:18:52.973990 env[2070]: time="2025-03-17T18:18:52.973939322Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:18:52.974256 env[2070]: time="2025-03-17T18:18:52.974211962Z" level=info msg="Daemon has completed initialization" Mar 17 18:18:53.004543 systemd[1]: Started docker.service. Mar 17 18:18:53.020502 env[2070]: time="2025-03-17T18:18:53.020405302Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:18:53.050773 systemd-timesyncd[1764]: Contacted time server 23.157.160.168:123 (2.flatcar.pool.ntp.org). 
Mar 17 18:18:53.050943 systemd-timesyncd[1764]: Initial clock synchronization to Mon 2025-03-17 18:18:52.967600 UTC. Mar 17 18:18:54.191225 env[1819]: time="2025-03-17T18:18:54.191142547Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 18:18:54.804781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022259333.mount: Deactivated successfully. Mar 17 18:18:55.830784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:18:55.831140 systemd[1]: Stopped kubelet.service. Mar 17 18:18:55.831213 systemd[1]: kubelet.service: Consumed 1.416s CPU time. Mar 17 18:18:55.833667 systemd[1]: Starting kubelet.service... Mar 17 18:18:56.128851 systemd[1]: Started kubelet.service. Mar 17 18:18:56.227518 kubelet[2197]: E0317 18:18:56.227439 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:18:56.234173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:18:56.234484 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:18:56.534464 env[1819]: time="2025-03-17T18:18:56.534304683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:56.537323 env[1819]: time="2025-03-17T18:18:56.537257972Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:56.540836 env[1819]: time="2025-03-17T18:18:56.540750619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:56.543970 env[1819]: time="2025-03-17T18:18:56.543910420Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:56.545740 env[1819]: time="2025-03-17T18:18:56.545694387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 17 18:18:56.546724 env[1819]: time="2025-03-17T18:18:56.546662138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 18:18:58.062819 amazon-ssm-agent[1792]: 2025-03-17 18:18:58 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Mar 17 18:18:58.374723 env[1819]: time="2025-03-17T18:18:58.374582684Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:58.379839 env[1819]: time="2025-03-17T18:18:58.379732910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:58.382311 env[1819]: time="2025-03-17T18:18:58.382222942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:58.385632 env[1819]: time="2025-03-17T18:18:58.385581850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:58.387345 env[1819]: time="2025-03-17T18:18:58.387283095Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 17 18:18:58.388160 env[1819]: time="2025-03-17T18:18:58.388116310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 18:18:59.947759 env[1819]: time="2025-03-17T18:18:59.947690759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:59.950356 env[1819]: time="2025-03-17T18:18:59.950292566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:18:59.953690 env[1819]: time="2025-03-17T18:18:59.953618781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:59.957159 env[1819]: time="2025-03-17T18:18:59.957106923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:18:59.958793 env[1819]: time="2025-03-17T18:18:59.958713071Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 17 18:18:59.959930 env[1819]: time="2025-03-17T18:18:59.959864974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:19:01.268472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227221795.mount: Deactivated successfully. 
Mar 17 18:19:02.130134 env[1819]: time="2025-03-17T18:19:02.130067002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:02.132647 env[1819]: time="2025-03-17T18:19:02.132584389Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:02.135016 env[1819]: time="2025-03-17T18:19:02.134967902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:02.137152 env[1819]: time="2025-03-17T18:19:02.137090113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:02.138197 env[1819]: time="2025-03-17T18:19:02.138148546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 18:19:02.139101 env[1819]: time="2025-03-17T18:19:02.139056987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:19:02.636832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852116402.mount: Deactivated successfully. Mar 17 18:19:03.101854 amazon-ssm-agent[1792]: 2025-03-17 18:19:03 INFO [HealthCheck] HealthCheck reporting agent health. 
Mar 17 18:19:03.928218 env[1819]: time="2025-03-17T18:19:03.928156438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:03.930623 env[1819]: time="2025-03-17T18:19:03.930573542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:03.934959 env[1819]: time="2025-03-17T18:19:03.934890978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:03.938553 env[1819]: time="2025-03-17T18:19:03.938487141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:03.940447 env[1819]: time="2025-03-17T18:19:03.940399315Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:19:03.941354 env[1819]: time="2025-03-17T18:19:03.941307325Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 18:19:04.441519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076187387.mount: Deactivated successfully. 
Mar 17 18:19:04.481726 env[1819]: time="2025-03-17T18:19:04.481662185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:04.497044 env[1819]: time="2025-03-17T18:19:04.496965094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:04.501659 env[1819]: time="2025-03-17T18:19:04.501591800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:04.505908 env[1819]: time="2025-03-17T18:19:04.505841441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:04.507280 env[1819]: time="2025-03-17T18:19:04.507227587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 18:19:04.508279 env[1819]: time="2025-03-17T18:19:04.508232826Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 18:19:05.112283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689620314.mount: Deactivated successfully. Mar 17 18:19:06.414728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:19:06.415141 systemd[1]: Stopped kubelet.service. Mar 17 18:19:06.417597 systemd[1]: Starting kubelet.service... Mar 17 18:19:06.714341 systemd[1]: Started kubelet.service. 
Mar 17 18:19:06.806751 kubelet[2208]: E0317 18:19:06.806315 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:19:06.810174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:19:06.810511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:19:07.828012 env[1819]: time="2025-03-17T18:19:07.827940896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:07.830956 env[1819]: time="2025-03-17T18:19:07.830894068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:07.836031 env[1819]: time="2025-03-17T18:19:07.835977481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:07.840172 env[1819]: time="2025-03-17T18:19:07.840104385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:07.842220 env[1819]: time="2025-03-17T18:19:07.842155116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 17 18:19:12.529293 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Mar 17 18:19:15.768821 systemd[1]: Stopped kubelet.service. Mar 17 18:19:15.773319 systemd[1]: Starting kubelet.service... Mar 17 18:19:15.828225 systemd[1]: Reloading. Mar 17 18:19:15.982732 /usr/lib/systemd/system-generators/torcx-generator[2259]: time="2025-03-17T18:19:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:19:15.990929 /usr/lib/systemd/system-generators/torcx-generator[2259]: time="2025-03-17T18:19:15Z" level=info msg="torcx already run" Mar 17 18:19:16.155406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:19:16.156229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:19:16.195572 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:19:16.413955 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 18:19:16.414153 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 18:19:16.415048 systemd[1]: Stopped kubelet.service. Mar 17 18:19:16.419338 systemd[1]: Starting kubelet.service... Mar 17 18:19:17.033460 systemd[1]: Started kubelet.service. Mar 17 18:19:17.122961 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:19:17.122961 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:19:17.122961 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:19:17.123588 kubelet[2318]: I0317 18:19:17.123120 2318 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:19:18.227246 kubelet[2318]: I0317 18:19:18.227193 2318 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:19:18.227981 kubelet[2318]: I0317 18:19:18.227916 2318 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:19:18.228675 kubelet[2318]: I0317 18:19:18.228624 2318 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:19:18.305929 kubelet[2318]: E0317 18:19:18.305864 2318 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:18.307928 kubelet[2318]: I0317 18:19:18.307884 2318 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:19:18.322166 kubelet[2318]: E0317 18:19:18.322099 2318 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:19:18.322166 kubelet[2318]: I0317 18:19:18.322156 2318 
server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:19:18.329668 kubelet[2318]: I0317 18:19:18.329581 2318 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:19:18.329925 kubelet[2318]: I0317 18:19:18.329896 2318 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:19:18.330240 kubelet[2318]: I0317 18:19:18.330190 2318 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:19:18.330535 kubelet[2318]: I0317 18:19:18.330243 2318 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved
":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:19:18.330712 kubelet[2318]: I0317 18:19:18.330573 2318 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:19:18.330712 kubelet[2318]: I0317 18:19:18.330594 2318 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:19:18.330879 kubelet[2318]: I0317 18:19:18.330777 2318 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:19:18.341452 kubelet[2318]: W0317 18:19:18.341381 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-140&limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:18.341748 kubelet[2318]: E0317 18:19:18.341711 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-140&limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:18.342342 kubelet[2318]: I0317 18:19:18.342296 2318 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:19:18.342449 kubelet[2318]: I0317 18:19:18.342343 2318 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:19:18.342449 kubelet[2318]: I0317 18:19:18.342419 2318 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:19:18.342449 kubelet[2318]: I0317 18:19:18.342442 2318 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Mar 17 18:19:18.348485 kubelet[2318]: W0317 18:19:18.348398 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:18.348758 kubelet[2318]: E0317 18:19:18.348724 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:18.349552 kubelet[2318]: I0317 18:19:18.349506 2318 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:19:18.353473 kubelet[2318]: I0317 18:19:18.353429 2318 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:19:18.358164 kubelet[2318]: W0317 18:19:18.358126 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 18:19:18.365634 kubelet[2318]: I0317 18:19:18.365592 2318 server.go:1269] "Started kubelet" Mar 17 18:19:18.368602 kubelet[2318]: I0317 18:19:18.368530 2318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:19:18.370291 kubelet[2318]: I0317 18:19:18.370236 2318 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:19:18.374594 kubelet[2318]: I0317 18:19:18.374509 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:19:18.375090 kubelet[2318]: I0317 18:19:18.375066 2318 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:19:18.377646 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 18:19:18.377821 kubelet[2318]: E0317 18:19:18.375471 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.140:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.140:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-140.182daa02736ef226 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-140,UID:ip-172-31-23-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-140,},FirstTimestamp:2025-03-17 18:19:18.365557286 +0000 UTC m=+1.323969343,LastTimestamp:2025-03-17 18:19:18.365557286 +0000 UTC m=+1.323969343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-140,}" Mar 17 18:19:18.378749 kubelet[2318]: I0317 18:19:18.378692 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:19:18.380413 kubelet[2318]: I0317 18:19:18.380369 2318 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:19:18.387556 kubelet[2318]: E0317 18:19:18.387518 2318 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:19:18.388173 kubelet[2318]: E0317 18:19:18.388144 2318 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-140\" not found" Mar 17 18:19:18.388384 kubelet[2318]: I0317 18:19:18.388362 2318 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:19:18.388902 kubelet[2318]: I0317 18:19:18.388871 2318 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:19:18.389139 kubelet[2318]: I0317 18:19:18.389118 2318 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:19:18.391068 kubelet[2318]: W0317 18:19:18.391002 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:18.391337 kubelet[2318]: E0317 18:19:18.391302 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:18.393659 kubelet[2318]: E0317 18:19:18.393600 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-140?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused" interval="200ms" Mar 17 18:19:18.394208 kubelet[2318]: I0317 18:19:18.394175 2318 factory.go:221] 
Registration of the containerd container factory successfully Mar 17 18:19:18.394394 kubelet[2318]: I0317 18:19:18.394370 2318 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:19:18.394636 kubelet[2318]: I0317 18:19:18.394605 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:19:18.422641 kubelet[2318]: I0317 18:19:18.422598 2318 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:19:18.422641 kubelet[2318]: I0317 18:19:18.422635 2318 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:19:18.423002 kubelet[2318]: I0317 18:19:18.422668 2318 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:19:18.427272 kubelet[2318]: I0317 18:19:18.427223 2318 policy_none.go:49] "None policy: Start" Mar 17 18:19:18.428327 kubelet[2318]: I0317 18:19:18.428290 2318 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:19:18.428460 kubelet[2318]: I0317 18:19:18.428338 2318 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:19:18.443718 kubelet[2318]: I0317 18:19:18.443245 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:19:18.445692 systemd[1]: Created slice kubepods.slice. Mar 17 18:19:18.446481 kubelet[2318]: I0317 18:19:18.445701 2318 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:19:18.446481 kubelet[2318]: I0317 18:19:18.445737 2318 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:19:18.446481 kubelet[2318]: I0317 18:19:18.445767 2318 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:19:18.446481 kubelet[2318]: E0317 18:19:18.445868 2318 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:19:18.449064 kubelet[2318]: W0317 18:19:18.449003 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:18.449245 kubelet[2318]: E0317 18:19:18.449076 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:18.460669 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:19:18.467744 systemd[1]: Created slice kubepods-besteffort.slice. 
Mar 17 18:19:18.480868 kubelet[2318]: I0317 18:19:18.480692 2318 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:19:18.482086 kubelet[2318]: I0317 18:19:18.482048 2318 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:19:18.482398 kubelet[2318]: I0317 18:19:18.482298 2318 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:19:18.483097 kubelet[2318]: I0317 18:19:18.483057 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:19:18.485745 kubelet[2318]: E0317 18:19:18.485700 2318 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-140\" not found" Mar 17 18:19:18.565430 systemd[1]: Created slice kubepods-burstable-pod484492554b6c425efdef7c50b8b3ed10.slice. Mar 17 18:19:18.586421 systemd[1]: Created slice kubepods-burstable-podc24625f5dc3db7b1fcd5e83237e23b72.slice. Mar 17 18:19:18.594675 kubelet[2318]: E0317 18:19:18.594471 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-140?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused" interval="400ms" Mar 17 18:19:18.599178 kubelet[2318]: I0317 18:19:18.599112 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-140" Mar 17 18:19:18.600331 kubelet[2318]: E0317 18:19:18.600223 2318 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.140:6443/api/v1/nodes\": dial tcp 172.31.23.140:6443: connect: connection refused" node="ip-172-31-23-140" Mar 17 18:19:18.604229 systemd[1]: Created slice kubepods-burstable-pod0444b53b2338196e204ff4d0aa170d5c.slice. 
Mar 17 18:19:18.693598 kubelet[2318]: I0317 18:19:18.693513 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:18.693833 kubelet[2318]: I0317 18:19:18.693609 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:18.693833 kubelet[2318]: I0317 18:19:18.693682 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/484492554b6c425efdef7c50b8b3ed10-ca-certs\") pod \"kube-apiserver-ip-172-31-23-140\" (UID: \"484492554b6c425efdef7c50b8b3ed10\") " pod="kube-system/kube-apiserver-ip-172-31-23-140" Mar 17 18:19:18.693833 kubelet[2318]: I0317 18:19:18.693744 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:18.693833 kubelet[2318]: I0317 18:19:18.693786 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " 
pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:18.694148 kubelet[2318]: I0317 18:19:18.693877 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:18.694148 kubelet[2318]: I0317 18:19:18.693953 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0444b53b2338196e204ff4d0aa170d5c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-140\" (UID: \"0444b53b2338196e204ff4d0aa170d5c\") " pod="kube-system/kube-scheduler-ip-172-31-23-140" Mar 17 18:19:18.694148 kubelet[2318]: I0317 18:19:18.694019 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/484492554b6c425efdef7c50b8b3ed10-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-140\" (UID: \"484492554b6c425efdef7c50b8b3ed10\") " pod="kube-system/kube-apiserver-ip-172-31-23-140" Mar 17 18:19:18.694148 kubelet[2318]: I0317 18:19:18.694063 2318 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/484492554b6c425efdef7c50b8b3ed10-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-140\" (UID: \"484492554b6c425efdef7c50b8b3ed10\") " pod="kube-system/kube-apiserver-ip-172-31-23-140" Mar 17 18:19:18.804689 kubelet[2318]: I0317 18:19:18.803550 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-140" Mar 17 18:19:18.804868 kubelet[2318]: E0317 18:19:18.804679 2318 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://172.31.23.140:6443/api/v1/nodes\": dial tcp 172.31.23.140:6443: connect: connection refused" node="ip-172-31-23-140" Mar 17 18:19:18.883309 env[1819]: time="2025-03-17T18:19:18.883219823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-140,Uid:484492554b6c425efdef7c50b8b3ed10,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:18.902781 env[1819]: time="2025-03-17T18:19:18.902715006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-140,Uid:c24625f5dc3db7b1fcd5e83237e23b72,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:18.910610 env[1819]: time="2025-03-17T18:19:18.910518288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-140,Uid:0444b53b2338196e204ff4d0aa170d5c,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:18.997158 kubelet[2318]: E0317 18:19:18.997076 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-140?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused" interval="800ms" Mar 17 18:19:19.207458 kubelet[2318]: I0317 18:19:19.207102 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-140" Mar 17 18:19:19.207610 kubelet[2318]: E0317 18:19:19.207551 2318 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.140:6443/api/v1/nodes\": dial tcp 172.31.23.140:6443: connect: connection refused" node="ip-172-31-23-140" Mar 17 18:19:19.313037 kubelet[2318]: W0317 18:19:19.312972 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:19.313597 kubelet[2318]: E0317 18:19:19.313046 2318 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:19.455918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573275519.mount: Deactivated successfully. Mar 17 18:19:19.469103 env[1819]: time="2025-03-17T18:19:19.468963131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.476389 env[1819]: time="2025-03-17T18:19:19.476335234Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.480904 env[1819]: time="2025-03-17T18:19:19.480848705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.484574 env[1819]: time="2025-03-17T18:19:19.484509721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.487003 env[1819]: time="2025-03-17T18:19:19.486960995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.491679 env[1819]: time="2025-03-17T18:19:19.491621052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 
18:19:19.495284 env[1819]: time="2025-03-17T18:19:19.495237997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.500473 kubelet[2318]: W0317 18:19:19.500324 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:19.500473 kubelet[2318]: E0317 18:19:19.500419 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:19.501704 env[1819]: time="2025-03-17T18:19:19.501655143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.504718 env[1819]: time="2025-03-17T18:19:19.504670593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.506688 env[1819]: time="2025-03-17T18:19:19.506597576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.515048 env[1819]: time="2025-03-17T18:19:19.514994473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.518748 env[1819]: time="2025-03-17T18:19:19.518665518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:19.557971 env[1819]: time="2025-03-17T18:19:19.557742360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:19.558272 env[1819]: time="2025-03-17T18:19:19.557888886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:19:19.558272 env[1819]: time="2025-03-17T18:19:19.557960692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:19:19.558691 env[1819]: time="2025-03-17T18:19:19.558566481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8cd4a578b25263dc8c783fd01e7e0582d06f713a3da84f7d0ca2a39cbe97935 pid=2357 runtime=io.containerd.runc.v2 Mar 17 18:19:19.606448 systemd[1]: Started cri-containerd-a8cd4a578b25263dc8c783fd01e7e0582d06f713a3da84f7d0ca2a39cbe97935.scope. Mar 17 18:19:19.609894 env[1819]: time="2025-03-17T18:19:19.609752122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:19.616647 env[1819]: time="2025-03-17T18:19:19.616574359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:19:19.616976 env[1819]: time="2025-03-17T18:19:19.616920851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:19:19.617560 env[1819]: time="2025-03-17T18:19:19.617491038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65a2bad6efe8b8173009cac813f98b8c7f5c79d42f79e2d886405948cc035d43 pid=2384 runtime=io.containerd.runc.v2 Mar 17 18:19:19.647507 kubelet[2318]: W0317 18:19:19.647409 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-140&limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:19.647666 kubelet[2318]: E0317 18:19:19.647516 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-140&limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:19.647967 env[1819]: time="2025-03-17T18:19:19.647834386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:19.647967 env[1819]: time="2025-03-17T18:19:19.647917971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:19:19.648367 env[1819]: time="2025-03-17T18:19:19.648286919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:19:19.649289 env[1819]: time="2025-03-17T18:19:19.649186684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de162a6376c745db837221a3b18593df0b0fe5dcae7e5de639f23eb8bf2d7744 pid=2405 runtime=io.containerd.runc.v2 Mar 17 18:19:19.658841 systemd[1]: Started cri-containerd-65a2bad6efe8b8173009cac813f98b8c7f5c79d42f79e2d886405948cc035d43.scope. Mar 17 18:19:19.690655 systemd[1]: Started cri-containerd-de162a6376c745db837221a3b18593df0b0fe5dcae7e5de639f23eb8bf2d7744.scope. Mar 17 18:19:19.790078 env[1819]: time="2025-03-17T18:19:19.789787800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-140,Uid:c24625f5dc3db7b1fcd5e83237e23b72,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8cd4a578b25263dc8c783fd01e7e0582d06f713a3da84f7d0ca2a39cbe97935\"" Mar 17 18:19:19.797660 kubelet[2318]: E0317 18:19:19.797595 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-140?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused" interval="1.6s" Mar 17 18:19:19.799357 env[1819]: time="2025-03-17T18:19:19.799303586Z" level=info msg="CreateContainer within sandbox \"a8cd4a578b25263dc8c783fd01e7e0582d06f713a3da84f7d0ca2a39cbe97935\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:19:19.830921 env[1819]: time="2025-03-17T18:19:19.830857168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-140,Uid:484492554b6c425efdef7c50b8b3ed10,Namespace:kube-system,Attempt:0,} returns sandbox id \"65a2bad6efe8b8173009cac813f98b8c7f5c79d42f79e2d886405948cc035d43\"" Mar 17 18:19:19.843332 env[1819]: time="2025-03-17T18:19:19.843218211Z" level=info msg="CreateContainer within sandbox 
\"65a2bad6efe8b8173009cac813f98b8c7f5c79d42f79e2d886405948cc035d43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:19:19.847638 env[1819]: time="2025-03-17T18:19:19.847562496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-140,Uid:0444b53b2338196e204ff4d0aa170d5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"de162a6376c745db837221a3b18593df0b0fe5dcae7e5de639f23eb8bf2d7744\"" Mar 17 18:19:19.853830 env[1819]: time="2025-03-17T18:19:19.853738135Z" level=info msg="CreateContainer within sandbox \"de162a6376c745db837221a3b18593df0b0fe5dcae7e5de639f23eb8bf2d7744\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:19:19.859823 env[1819]: time="2025-03-17T18:19:19.859719170Z" level=info msg="CreateContainer within sandbox \"a8cd4a578b25263dc8c783fd01e7e0582d06f713a3da84f7d0ca2a39cbe97935\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae\"" Mar 17 18:19:19.860960 env[1819]: time="2025-03-17T18:19:19.860896740Z" level=info msg="StartContainer for \"f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae\"" Mar 17 18:19:19.893107 env[1819]: time="2025-03-17T18:19:19.893021757Z" level=info msg="CreateContainer within sandbox \"65a2bad6efe8b8173009cac813f98b8c7f5c79d42f79e2d886405948cc035d43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9497b6b64e7a259c98c52d1020816f723808bf3140b09b90630984878303be7b\"" Mar 17 18:19:19.894152 env[1819]: time="2025-03-17T18:19:19.894104095Z" level=info msg="StartContainer for \"9497b6b64e7a259c98c52d1020816f723808bf3140b09b90630984878303be7b\"" Mar 17 18:19:19.897288 env[1819]: time="2025-03-17T18:19:19.897227229Z" level=info msg="CreateContainer within sandbox \"de162a6376c745db837221a3b18593df0b0fe5dcae7e5de639f23eb8bf2d7744\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns 
container id \"86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62\"" Mar 17 18:19:19.898354 env[1819]: time="2025-03-17T18:19:19.898303725Z" level=info msg="StartContainer for \"86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62\"" Mar 17 18:19:19.901698 systemd[1]: Started cri-containerd-f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae.scope. Mar 17 18:19:19.941099 systemd[1]: Started cri-containerd-86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62.scope. Mar 17 18:19:19.985135 systemd[1]: Started cri-containerd-9497b6b64e7a259c98c52d1020816f723808bf3140b09b90630984878303be7b.scope. Mar 17 18:19:20.010546 kubelet[2318]: I0317 18:19:20.010488 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-140" Mar 17 18:19:20.011059 kubelet[2318]: E0317 18:19:20.011007 2318 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.140:6443/api/v1/nodes\": dial tcp 172.31.23.140:6443: connect: connection refused" node="ip-172-31-23-140" Mar 17 18:19:20.020052 kubelet[2318]: W0317 18:19:20.019949 2318 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.140:6443: connect: connection refused Mar 17 18:19:20.020242 kubelet[2318]: E0317 18:19:20.020049 2318 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.140:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:19:20.047244 env[1819]: time="2025-03-17T18:19:20.047089665Z" level=info msg="StartContainer for \"f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae\" returns successfully" Mar 17 18:19:20.090352 
env[1819]: time="2025-03-17T18:19:20.090289025Z" level=info msg="StartContainer for \"86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62\" returns successfully" Mar 17 18:19:20.131979 env[1819]: time="2025-03-17T18:19:20.131890572Z" level=info msg="StartContainer for \"9497b6b64e7a259c98c52d1020816f723808bf3140b09b90630984878303be7b\" returns successfully" Mar 17 18:19:21.613142 kubelet[2318]: I0317 18:19:21.613094 2318 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-140" Mar 17 18:19:23.350902 kubelet[2318]: I0317 18:19:23.350838 2318 apiserver.go:52] "Watching apiserver" Mar 17 18:19:23.378958 kubelet[2318]: E0317 18:19:23.378901 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-140\" not found" node="ip-172-31-23-140" Mar 17 18:19:23.389771 kubelet[2318]: I0317 18:19:23.389705 2318 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:19:23.458965 kubelet[2318]: I0317 18:19:23.458911 2318 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-140" Mar 17 18:19:23.514604 kubelet[2318]: E0317 18:19:23.514445 2318 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-140.182daa02736ef226 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-140,UID:ip-172-31-23-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-140,},FirstTimestamp:2025-03-17 18:19:18.365557286 +0000 UTC m=+1.323969343,LastTimestamp:2025-03-17 18:19:18.365557286 +0000 UTC m=+1.323969343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-140,}" Mar 17 18:19:25.541701 systemd[1]: 
Reloading. Mar 17 18:19:25.751779 /usr/lib/systemd/system-generators/torcx-generator[2616]: time="2025-03-17T18:19:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:19:25.751909 /usr/lib/systemd/system-generators/torcx-generator[2616]: time="2025-03-17T18:19:25Z" level=info msg="torcx already run" Mar 17 18:19:25.953460 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:19:25.953500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:19:25.994515 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:19:26.245564 systemd[1]: Stopping kubelet.service... Mar 17 18:19:26.277767 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:19:26.278176 systemd[1]: Stopped kubelet.service. Mar 17 18:19:26.278259 systemd[1]: kubelet.service: Consumed 1.959s CPU time. Mar 17 18:19:26.282592 systemd[1]: Starting kubelet.service... Mar 17 18:19:26.577222 systemd[1]: Started kubelet.service. Mar 17 18:19:26.727512 kubelet[2675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:19:26.727512 kubelet[2675]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 17 18:19:26.727512 kubelet[2675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:19:26.728157 kubelet[2675]: I0317 18:19:26.727648 2675 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:19:26.733575 sudo[2686]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:19:26.734187 sudo[2686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:19:26.742531 kubelet[2675]: I0317 18:19:26.742482 2675 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:19:26.744435 kubelet[2675]: I0317 18:19:26.744402 2675 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:19:26.745164 kubelet[2675]: I0317 18:19:26.745137 2675 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:19:26.748027 kubelet[2675]: I0317 18:19:26.747987 2675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:19:26.757289 kubelet[2675]: I0317 18:19:26.757218 2675 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:19:26.773391 kubelet[2675]: E0317 18:19:26.773323 2675 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:19:26.773620 kubelet[2675]: I0317 18:19:26.773596 2675 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Mar 17 18:19:26.787854 kubelet[2675]: I0317 18:19:26.787231 2675 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:19:26.787854 kubelet[2675]: I0317 18:19:26.787530 2675 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:19:26.787854 kubelet[2675]: I0317 18:19:26.787839 2675 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:19:26.788473 kubelet[2675]: I0317 18:19:26.787882 2675 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"Expe
rimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:19:26.788682 kubelet[2675]: I0317 18:19:26.788489 2675 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:19:26.788682 kubelet[2675]: I0317 18:19:26.788515 2675 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:19:26.788682 kubelet[2675]: I0317 18:19:26.788589 2675 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:19:26.788928 kubelet[2675]: I0317 18:19:26.788786 2675 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:19:26.788928 kubelet[2675]: I0317 18:19:26.788831 2675 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:19:26.788928 kubelet[2675]: I0317 18:19:26.788876 2675 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:19:26.788928 kubelet[2675]: I0317 18:19:26.788911 2675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:19:26.805673 kubelet[2675]: I0317 18:19:26.805104 2675 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:19:26.812319 kubelet[2675]: I0317 18:19:26.809915 2675 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:19:26.815845 kubelet[2675]: I0317 18:19:26.813302 2675 server.go:1269] "Started kubelet" Mar 17 18:19:26.818629 kubelet[2675]: I0317 18:19:26.818559 2675 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:19:26.820246 kubelet[2675]: I0317 18:19:26.820202 2675 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:19:26.823321 kubelet[2675]: I0317 18:19:26.822330 2675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Mar 17 18:19:26.823321 kubelet[2675]: I0317 18:19:26.822833 2675 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:19:26.837226 kubelet[2675]: I0317 18:19:26.837094 2675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:19:26.854247 kubelet[2675]: I0317 18:19:26.854187 2675 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:19:26.860957 kubelet[2675]: I0317 18:19:26.860912 2675 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:19:26.861490 kubelet[2675]: E0317 18:19:26.861457 2675 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-140\" not found" Mar 17 18:19:26.862317 kubelet[2675]: I0317 18:19:26.862293 2675 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:19:26.862709 kubelet[2675]: I0317 18:19:26.862689 2675 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:19:26.893199 kubelet[2675]: I0317 18:19:26.891103 2675 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:19:26.893199 kubelet[2675]: I0317 18:19:26.891428 2675 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:19:26.910851 kubelet[2675]: I0317 18:19:26.908016 2675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:19:26.911522 kubelet[2675]: I0317 18:19:26.911230 2675 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:19:26.911522 kubelet[2675]: I0317 18:19:26.911279 2675 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:19:26.911522 kubelet[2675]: I0317 18:19:26.911313 2675 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:19:26.911522 kubelet[2675]: E0317 18:19:26.911441 2675 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:19:26.922239 kubelet[2675]: I0317 18:19:26.921757 2675 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:19:27.015192 kubelet[2675]: E0317 18:19:27.014674 2675 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:19:27.039724 kubelet[2675]: I0317 18:19:27.039674 2675 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:19:27.039724 kubelet[2675]: I0317 18:19:27.039708 2675 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:19:27.039984 kubelet[2675]: I0317 18:19:27.039749 2675 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:19:27.040165 kubelet[2675]: I0317 18:19:27.040126 2675 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:19:27.040274 kubelet[2675]: I0317 18:19:27.040159 2675 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:19:27.040274 kubelet[2675]: I0317 18:19:27.040213 2675 policy_none.go:49] "None policy: Start" Mar 17 18:19:27.042566 kubelet[2675]: I0317 18:19:27.042520 2675 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:19:27.042703 kubelet[2675]: I0317 18:19:27.042577 2675 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:19:27.043025 kubelet[2675]: I0317 18:19:27.042968 2675 state_mem.go:75] "Updated machine memory state" Mar 17 18:19:27.058705 kubelet[2675]: I0317 18:19:27.058125 2675 manager.go:510] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:19:27.058705 kubelet[2675]: I0317 18:19:27.058551 2675 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:19:27.060335 kubelet[2675]: I0317 18:19:27.060251 2675 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:19:27.074538 kubelet[2675]: I0317 18:19:27.073842 2675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:19:27.190755 kubelet[2675]: I0317 18:19:27.190406 2675 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-140" Mar 17 18:19:27.203468 kubelet[2675]: I0317 18:19:27.203412 2675 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-140" Mar 17 18:19:27.203618 kubelet[2675]: I0317 18:19:27.203536 2675 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-140" Mar 17 18:19:27.234552 kubelet[2675]: E0317 18:19:27.234462 2675 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-140\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:27.266967 kubelet[2675]: I0317 18:19:27.266909 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/484492554b6c425efdef7c50b8b3ed10-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-140\" (UID: \"484492554b6c425efdef7c50b8b3ed10\") " pod="kube-system/kube-apiserver-ip-172-31-23-140" Mar 17 18:19:27.267160 kubelet[2675]: I0317 18:19:27.266975 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/484492554b6c425efdef7c50b8b3ed10-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-140\" (UID: \"484492554b6c425efdef7c50b8b3ed10\") " 
pod="kube-system/kube-apiserver-ip-172-31-23-140" Mar 17 18:19:27.267160 kubelet[2675]: I0317 18:19:27.267025 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:27.267160 kubelet[2675]: I0317 18:19:27.267068 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:27.267160 kubelet[2675]: I0317 18:19:27.267109 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0444b53b2338196e204ff4d0aa170d5c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-140\" (UID: \"0444b53b2338196e204ff4d0aa170d5c\") " pod="kube-system/kube-scheduler-ip-172-31-23-140" Mar 17 18:19:27.267160 kubelet[2675]: I0317 18:19:27.267153 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/484492554b6c425efdef7c50b8b3ed10-ca-certs\") pod \"kube-apiserver-ip-172-31-23-140\" (UID: \"484492554b6c425efdef7c50b8b3ed10\") " pod="kube-system/kube-apiserver-ip-172-31-23-140" Mar 17 18:19:27.267464 kubelet[2675]: I0317 18:19:27.267188 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:27.267464 kubelet[2675]: I0317 18:19:27.267232 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:27.267464 kubelet[2675]: I0317 18:19:27.267272 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c24625f5dc3db7b1fcd5e83237e23b72-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-140\" (UID: \"c24625f5dc3db7b1fcd5e83237e23b72\") " pod="kube-system/kube-controller-manager-ip-172-31-23-140" Mar 17 18:19:27.436915 update_engine[1805]: I0317 18:19:27.436858 1805 update_attempter.cc:509] Updating boot flags... 
Mar 17 18:19:27.796693 kubelet[2675]: I0317 18:19:27.795982 2675 apiserver.go:52] "Watching apiserver"
Mar 17 18:19:27.864212 kubelet[2675]: I0317 18:19:27.864077 2675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:19:28.021162 sudo[2686]: pam_unix(sudo:session): session closed for user root
Mar 17 18:19:28.082568 kubelet[2675]: I0317 18:19:28.082388 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-140" podStartSLOduration=1.082365108 podStartE2EDuration="1.082365108s" podCreationTimestamp="2025-03-17 18:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:19:28.05795028 +0000 UTC m=+1.465320620" watchObservedRunningTime="2025-03-17 18:19:28.082365108 +0000 UTC m=+1.489735448"
Mar 17 18:19:28.112911 kubelet[2675]: I0317 18:19:28.107443 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-140" podStartSLOduration=3.107420299 podStartE2EDuration="3.107420299s" podCreationTimestamp="2025-03-17 18:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:19:28.083064316 +0000 UTC m=+1.490434632" watchObservedRunningTime="2025-03-17 18:19:28.107420299 +0000 UTC m=+1.514790627"
Mar 17 18:19:28.113450 amazon-ssm-agent[1792]: 2025-03-17 18:19:28 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Mar 17 18:19:28.126488 kubelet[2675]: I0317 18:19:28.125403 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-140" podStartSLOduration=1.12537605 podStartE2EDuration="1.12537605s" podCreationTimestamp="2025-03-17 18:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:19:28.108035261 +0000 UTC m=+1.515405649" watchObservedRunningTime="2025-03-17 18:19:28.12537605 +0000 UTC m=+1.532746378"
Mar 17 18:19:30.253291 kubelet[2675]: I0317 18:19:30.253234 2675 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:19:30.255145 env[1819]: time="2025-03-17T18:19:30.255079333Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:19:30.257096 kubelet[2675]: I0317 18:19:30.256409 2675 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:19:30.910675 systemd[1]: Created slice kubepods-besteffort-podf9d9f147_6554_4ac3_9465_0ab6071a120c.slice.
Mar 17 18:19:30.977375 systemd[1]: Created slice kubepods-burstable-podd17bfb63_7179_4d49_87ad_94cb4fa59895.slice.
Mar 17 18:19:31.002504 kubelet[2675]: I0317 18:19:31.002456 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9d9f147-6554-4ac3-9465-0ab6071a120c-xtables-lock\") pod \"kube-proxy-qqh9d\" (UID: \"f9d9f147-6554-4ac3-9465-0ab6071a120c\") " pod="kube-system/kube-proxy-qqh9d"
Mar 17 18:19:31.002866 kubelet[2675]: I0317 18:19:31.002834 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-run\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.003170 kubelet[2675]: I0317 18:19:31.003057 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqnj\" (UniqueName: \"kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-kube-api-access-fzqnj\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.003403 kubelet[2675]: I0317 18:19:31.003332 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-bpf-maps\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.003619 kubelet[2675]: I0317 18:19:31.003587 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9kz\" (UniqueName: \"kubernetes.io/projected/f9d9f147-6554-4ac3-9465-0ab6071a120c-kube-api-access-pj9kz\") pod \"kube-proxy-qqh9d\" (UID: \"f9d9f147-6554-4ac3-9465-0ab6071a120c\") " pod="kube-system/kube-proxy-qqh9d"
Mar 17 18:19:31.003892 kubelet[2675]: I0317 18:19:31.003863 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-config-path\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.004090 kubelet[2675]: I0317 18:19:31.004046 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-etc-cni-netd\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.004310 kubelet[2675]: I0317 18:19:31.004283 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9d9f147-6554-4ac3-9465-0ab6071a120c-lib-modules\") pod \"kube-proxy-qqh9d\" (UID: \"f9d9f147-6554-4ac3-9465-0ab6071a120c\") " pod="kube-system/kube-proxy-qqh9d"
Mar 17 18:19:31.004529 kubelet[2675]: I0317 18:19:31.004490 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-hostproc\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.004745 kubelet[2675]: I0317 18:19:31.004718 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-net\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.005066 kubelet[2675]: I0317 18:19:31.004978 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-kernel\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.005223 kubelet[2675]: I0317 18:19:31.005199 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-hubble-tls\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.005451 kubelet[2675]: I0317 18:19:31.005424 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9d9f147-6554-4ac3-9465-0ab6071a120c-kube-proxy\") pod \"kube-proxy-qqh9d\" (UID: \"f9d9f147-6554-4ac3-9465-0ab6071a120c\") " pod="kube-system/kube-proxy-qqh9d"
Mar 17 18:19:31.005653 kubelet[2675]: I0317 18:19:31.005619 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-lib-modules\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.005887 kubelet[2675]: I0317 18:19:31.005861 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d17bfb63-7179-4d49-87ad-94cb4fa59895-clustermesh-secrets\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.006119 kubelet[2675]: I0317 18:19:31.006093 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-cgroup\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.006338 kubelet[2675]: I0317 18:19:31.006310 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cni-path\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.006556 kubelet[2675]: I0317 18:19:31.006529 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-xtables-lock\") pod \"cilium-5hz9h\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " pod="kube-system/cilium-5hz9h"
Mar 17 18:19:31.015750 kubelet[2675]: W0317 18:19:31.015693 2675 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-23-140" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-140' and this object
Mar 17 18:19:31.016072 kubelet[2675]: E0317 18:19:31.016025 2675 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-23-140\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-140' and this object" logger="UnhandledError"
Mar 17 18:19:31.016486 kubelet[2675]: W0317 18:19:31.016457 2675 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-23-140" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-140' and this object
Mar 17 18:19:31.016723 kubelet[2675]: E0317 18:19:31.016675 2675 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-23-140\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-140' and this object" logger="UnhandledError"
Mar 17 18:19:31.017337 kubelet[2675]: W0317 18:19:31.017298 2675 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-23-140" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-140' and this object
Mar 17 18:19:31.017600 kubelet[2675]: E0317 18:19:31.017548 2675 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-23-140\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-140' and this object" logger="UnhandledError"
Mar 17 18:19:31.179379 kubelet[2675]: E0317 18:19:31.179329 2675 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 17 18:19:31.179670 kubelet[2675]: E0317 18:19:31.179643 2675 projected.go:194] Error preparing data for projected volume kube-api-access-pj9kz for pod kube-system/kube-proxy-qqh9d: configmap "kube-root-ca.crt" not found
Mar 17 18:19:31.179937 kubelet[2675]: E0317 18:19:31.179893 2675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9d9f147-6554-4ac3-9465-0ab6071a120c-kube-api-access-pj9kz podName:f9d9f147-6554-4ac3-9465-0ab6071a120c nodeName:}" failed. No retries permitted until 2025-03-17 18:19:31.679858806 +0000 UTC m=+5.087229134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pj9kz" (UniqueName: "kubernetes.io/projected/f9d9f147-6554-4ac3-9465-0ab6071a120c-kube-api-access-pj9kz") pod "kube-proxy-qqh9d" (UID: "f9d9f147-6554-4ac3-9465-0ab6071a120c") : configmap "kube-root-ca.crt" not found
Mar 17 18:19:31.181721 kubelet[2675]: E0317 18:19:31.181675 2675 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 17 18:19:31.181971 kubelet[2675]: E0317 18:19:31.181939 2675 projected.go:194] Error preparing data for projected volume kube-api-access-fzqnj for pod kube-system/cilium-5hz9h: configmap "kube-root-ca.crt" not found
Mar 17 18:19:31.182203 kubelet[2675]: E0317 18:19:31.182175 2675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-kube-api-access-fzqnj podName:d17bfb63-7179-4d49-87ad-94cb4fa59895 nodeName:}" failed. No retries permitted until 2025-03-17 18:19:31.682139706 +0000 UTC m=+5.089510034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fzqnj" (UniqueName: "kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-kube-api-access-fzqnj") pod "cilium-5hz9h" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895") : configmap "kube-root-ca.crt" not found
Mar 17 18:19:31.273429 systemd[1]: Created slice kubepods-besteffort-pod500ada9e_79d4_43c8_8dc3_12df6c0f0dd4.slice.
Mar 17 18:19:31.309746 kubelet[2675]: I0317 18:19:31.309676 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm54s\" (UniqueName: \"kubernetes.io/projected/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-kube-api-access-cm54s\") pod \"cilium-operator-5d85765b45-gsr2k\" (UID: \"500ada9e-79d4-43c8-8dc3-12df6c0f0dd4\") " pod="kube-system/cilium-operator-5d85765b45-gsr2k"
Mar 17 18:19:31.310528 kubelet[2675]: I0317 18:19:31.310478 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-cilium-config-path\") pod \"cilium-operator-5d85765b45-gsr2k\" (UID: \"500ada9e-79d4-43c8-8dc3-12df6c0f0dd4\") " pod="kube-system/cilium-operator-5d85765b45-gsr2k"
Mar 17 18:19:31.443134 kubelet[2675]: I0317 18:19:31.442983 2675 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 17 18:19:31.477752 kubelet[2675]: E0317 18:19:31.477686 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path clustermesh-secrets hubble-tls kube-api-access-fzqnj], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5hz9h" podUID="d17bfb63-7179-4d49-87ad-94cb4fa59895"
Mar 17 18:19:31.823094 env[1819]: time="2025-03-17T18:19:31.822779798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qqh9d,Uid:f9d9f147-6554-4ac3-9465-0ab6071a120c,Namespace:kube-system,Attempt:0,}"
Mar 17 18:19:31.870635 env[1819]: time="2025-03-17T18:19:31.870489494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:19:31.870937 env[1819]: time="2025-03-17T18:19:31.870578781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:19:31.870937 env[1819]: time="2025-03-17T18:19:31.870609307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:19:31.871335 env[1819]: time="2025-03-17T18:19:31.871270281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee8ce9229ed194c5ae37b932b76dfbdb9132651b1f8dcbdde85af80360c67c7d pid=2923 runtime=io.containerd.runc.v2
Mar 17 18:19:31.894429 systemd[1]: Started cri-containerd-ee8ce9229ed194c5ae37b932b76dfbdb9132651b1f8dcbdde85af80360c67c7d.scope.
Mar 17 18:19:31.956965 env[1819]: time="2025-03-17T18:19:31.956909745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qqh9d,Uid:f9d9f147-6554-4ac3-9465-0ab6071a120c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee8ce9229ed194c5ae37b932b76dfbdb9132651b1f8dcbdde85af80360c67c7d\""
Mar 17 18:19:31.964260 env[1819]: time="2025-03-17T18:19:31.964198040Z" level=info msg="CreateContainer within sandbox \"ee8ce9229ed194c5ae37b932b76dfbdb9132651b1f8dcbdde85af80360c67c7d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:19:32.008910 env[1819]: time="2025-03-17T18:19:32.005207027Z" level=info msg="CreateContainer within sandbox \"ee8ce9229ed194c5ae37b932b76dfbdb9132651b1f8dcbdde85af80360c67c7d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c796a544eac4818d3042e6b5102347b40f00e1ad81756c529e239411c033385\""
Mar 17 18:19:32.013830 env[1819]: time="2025-03-17T18:19:32.011076497Z" level=info msg="StartContainer for \"9c796a544eac4818d3042e6b5102347b40f00e1ad81756c529e239411c033385\""
Mar 17 18:19:32.016090 kubelet[2675]: I0317 18:19:32.016045 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-hostproc\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.016352 kubelet[2675]: I0317 18:19:32.016322 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-kernel\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.016513 kubelet[2675]: I0317 18:19:32.016488 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-lib-modules\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.016657 kubelet[2675]: I0317 18:19:32.016631 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-bpf-maps\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.016830 kubelet[2675]: I0317 18:19:32.016787 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-etc-cni-netd\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.016992 kubelet[2675]: I0317 18:19:32.016968 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-net\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.017133 kubelet[2675]: I0317 18:19:32.017108 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-cgroup\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.017302 kubelet[2675]: I0317 18:19:32.017274 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqnj\" (UniqueName: \"kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-kube-api-access-fzqnj\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.017459 kubelet[2675]: I0317 18:19:32.017434 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-hubble-tls\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.017634 kubelet[2675]: I0317 18:19:32.017607 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-xtables-lock\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.017774 kubelet[2675]: I0317 18:19:32.017749 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-run\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.018005 kubelet[2675]: I0317 18:19:32.017979 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cni-path\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") "
Mar 17 18:19:32.018317 kubelet[2675]: I0317 18:19:32.017850 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.018577 kubelet[2675]: I0317 18:19:32.017883 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-hostproc" (OuterVolumeSpecName: "hostproc") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.018720 kubelet[2675]: I0317 18:19:32.017953 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.018872 kubelet[2675]: I0317 18:19:32.018098 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.018980 kubelet[2675]: I0317 18:19:32.018128 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.019132 kubelet[2675]: I0317 18:19:32.018237 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cni-path" (OuterVolumeSpecName: "cni-path") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.019243 kubelet[2675]: I0317 18:19:32.018264 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.019359 kubelet[2675]: I0317 18:19:32.018288 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.021319 kubelet[2675]: I0317 18:19:32.021270 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.024641 kubelet[2675]: I0317 18:19:32.024590 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:19:32.034603 kubelet[2675]: I0317 18:19:32.034555 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:19:32.035153 kubelet[2675]: I0317 18:19:32.035050 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-kube-api-access-fzqnj" (OuterVolumeSpecName: "kube-api-access-fzqnj") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "kube-api-access-fzqnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:19:32.062059 systemd[1]: Started cri-containerd-9c796a544eac4818d3042e6b5102347b40f00e1ad81756c529e239411c033385.scope.
Mar 17 18:19:32.109774 kubelet[2675]: E0317 18:19:32.109182 2675 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:19:32.109774 kubelet[2675]: E0317 18:19:32.109310 2675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-config-path podName:d17bfb63-7179-4d49-87ad-94cb4fa59895 nodeName:}" failed. No retries permitted until 2025-03-17 18:19:32.609282143 +0000 UTC m=+6.016652471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-config-path") pod "cilium-5hz9h" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895") : failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:19:32.112371 kubelet[2675]: E0317 18:19:32.112287 2675 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Mar 17 18:19:32.112541 kubelet[2675]: E0317 18:19:32.112433 2675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d17bfb63-7179-4d49-87ad-94cb4fa59895-clustermesh-secrets podName:d17bfb63-7179-4d49-87ad-94cb4fa59895 nodeName:}" failed. No retries permitted until 2025-03-17 18:19:32.612405143 +0000 UTC m=+6.019775471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/d17bfb63-7179-4d49-87ad-94cb4fa59895-clustermesh-secrets") pod "cilium-5hz9h" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895") : failed to sync secret cache: timed out waiting for the condition
Mar 17 18:19:32.118870 kubelet[2675]: I0317 18:19:32.118558 2675 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-lib-modules\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.118870 kubelet[2675]: I0317 18:19:32.118623 2675 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-hostproc\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.118870 kubelet[2675]: I0317 18:19:32.118647 2675 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-kernel\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.118870 kubelet[2675]: I0317 18:19:32.118696 2675 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-etc-cni-netd\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.118870 kubelet[2675]: I0317 18:19:32.118722 2675 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-bpf-maps\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.118870 kubelet[2675]: I0317 18:19:32.118746 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-cgroup\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.118870 kubelet[2675]: I0317 18:19:32.118790 2675 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-host-proc-sys-net\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.119356 kubelet[2675]: I0317 18:19:32.118899 2675 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fzqnj\" (UniqueName: \"kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-kube-api-access-fzqnj\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.119356 kubelet[2675]: I0317 18:19:32.118920 2675 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d17bfb63-7179-4d49-87ad-94cb4fa59895-hubble-tls\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.119356 kubelet[2675]: I0317 18:19:32.118967 2675 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-xtables-lock\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.119356 kubelet[2675]: I0317 18:19:32.118992 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-run\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.119356 kubelet[2675]: I0317 18:19:32.119012 2675 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d17bfb63-7179-4d49-87ad-94cb4fa59895-cni-path\") on node \"ip-172-31-23-140\" DevicePath \"\""
Mar 17 18:19:32.140620 env[1819]: time="2025-03-17T18:19:32.140546484Z" level=info msg="StartContainer for \"9c796a544eac4818d3042e6b5102347b40f00e1ad81756c529e239411c033385\" returns successfully"
Mar 17 18:19:32.453553 systemd[1]: var-lib-kubelet-pods-d17bfb63\x2d7179\x2d4d49\x2d87ad\x2d94cb4fa59895-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:19:32.455040 systemd[1]: var-lib-kubelet-pods-d17bfb63\x2d7179\x2d4d49\x2d87ad\x2d94cb4fa59895-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzqnj.mount: Deactivated successfully. Mar 17 18:19:32.478985 env[1819]: time="2025-03-17T18:19:32.478928293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gsr2k,Uid:500ada9e-79d4-43c8-8dc3-12df6c0f0dd4,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:32.525474 env[1819]: time="2025-03-17T18:19:32.525307858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:32.525474 env[1819]: time="2025-03-17T18:19:32.525388266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:19:32.525474 env[1819]: time="2025-03-17T18:19:32.525415828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:19:32.526268 env[1819]: time="2025-03-17T18:19:32.526152099Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18 pid=3048 runtime=io.containerd.runc.v2 Mar 17 18:19:32.578581 systemd[1]: run-containerd-runc-k8s.io-839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18-runc.gLwUmu.mount: Deactivated successfully. Mar 17 18:19:32.584679 systemd[1]: Started cri-containerd-839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18.scope. 
Mar 17 18:19:32.707380 env[1819]: time="2025-03-17T18:19:32.707236725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gsr2k,Uid:500ada9e-79d4-43c8-8dc3-12df6c0f0dd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\"" Mar 17 18:19:32.711650 env[1819]: time="2025-03-17T18:19:32.711559017Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:19:32.828237 kubelet[2675]: I0317 18:19:32.828171 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d17bfb63-7179-4d49-87ad-94cb4fa59895-clustermesh-secrets\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " Mar 17 18:19:32.828788 kubelet[2675]: I0317 18:19:32.828283 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-config-path\") pod \"d17bfb63-7179-4d49-87ad-94cb4fa59895\" (UID: \"d17bfb63-7179-4d49-87ad-94cb4fa59895\") " Mar 17 18:19:32.835559 kubelet[2675]: I0317 18:19:32.835499 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:19:32.836090 kubelet[2675]: I0317 18:19:32.836043 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17bfb63-7179-4d49-87ad-94cb4fa59895-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d17bfb63-7179-4d49-87ad-94cb4fa59895" (UID: "d17bfb63-7179-4d49-87ad-94cb4fa59895"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:19:32.924451 systemd[1]: Removed slice kubepods-burstable-podd17bfb63_7179_4d49_87ad_94cb4fa59895.slice. Mar 17 18:19:32.929416 kubelet[2675]: I0317 18:19:32.929379 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d17bfb63-7179-4d49-87ad-94cb4fa59895-cilium-config-path\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:19:32.929589 kubelet[2675]: I0317 18:19:32.929566 2675 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d17bfb63-7179-4d49-87ad-94cb4fa59895-clustermesh-secrets\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:19:33.042079 kubelet[2675]: I0317 18:19:33.041857 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qqh9d" podStartSLOduration=3.04183408 podStartE2EDuration="3.04183408s" podCreationTimestamp="2025-03-17 18:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:19:33.019540059 +0000 UTC m=+6.426910399" watchObservedRunningTime="2025-03-17 18:19:33.04183408 +0000 UTC m=+6.449204419" Mar 17 18:19:33.110389 systemd[1]: Created slice kubepods-burstable-pod9f84c6b6_1fd1_4d1b_bb3e_c2c3bd640aec.slice. 
Mar 17 18:19:33.231451 kubelet[2675]: I0317 18:19:33.231404 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-bpf-maps\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.231696 kubelet[2675]: I0317 18:19:33.231668 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hostproc\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.231898 kubelet[2675]: I0317 18:19:33.231869 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-net\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.232050 kubelet[2675]: I0317 18:19:33.232024 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t6jl\" (UniqueName: \"kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-kube-api-access-2t6jl\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.232195 kubelet[2675]: I0317 18:19:33.232170 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-xtables-lock\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.232383 kubelet[2675]: I0317 18:19:33.232356 2675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-clustermesh-secrets\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.232533 kubelet[2675]: I0317 18:19:33.232508 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-lib-modules\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.232682 kubelet[2675]: I0317 18:19:33.232655 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hubble-tls\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.232865 kubelet[2675]: I0317 18:19:33.232840 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-config-path\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.233008 kubelet[2675]: I0317 18:19:33.232981 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-kernel\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.233172 kubelet[2675]: I0317 18:19:33.233147 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-cgroup\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.233315 kubelet[2675]: I0317 18:19:33.233287 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cni-path\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.233454 kubelet[2675]: I0317 18:19:33.233429 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-etc-cni-netd\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.233596 kubelet[2675]: I0317 18:19:33.233570 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-run\") pod \"cilium-fr9kf\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " pod="kube-system/cilium-fr9kf" Mar 17 18:19:33.416088 env[1819]: time="2025-03-17T18:19:33.416027237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fr9kf,Uid:9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:33.446154 env[1819]: time="2025-03-17T18:19:33.445976943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:33.446390 env[1819]: time="2025-03-17T18:19:33.446179505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:19:33.446390 env[1819]: time="2025-03-17T18:19:33.446270425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:19:33.446779 env[1819]: time="2025-03-17T18:19:33.446645539Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685 pid=3175 runtime=io.containerd.runc.v2 Mar 17 18:19:33.497245 systemd[1]: Started cri-containerd-4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685.scope. Mar 17 18:19:33.562652 env[1819]: time="2025-03-17T18:19:33.562583904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fr9kf,Uid:9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec,Namespace:kube-system,Attempt:0,} returns sandbox id \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\"" Mar 17 18:19:34.927829 kubelet[2675]: I0317 18:19:34.927737 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17bfb63-7179-4d49-87ad-94cb4fa59895" path="/var/lib/kubelet/pods/d17bfb63-7179-4d49-87ad-94cb4fa59895/volumes" Mar 17 18:19:37.685700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503304706.mount: Deactivated successfully. 
Mar 17 18:19:38.673344 env[1819]: time="2025-03-17T18:19:38.673279991Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:38.675437 env[1819]: time="2025-03-17T18:19:38.675388194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:38.678009 env[1819]: time="2025-03-17T18:19:38.677941497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:38.679533 env[1819]: time="2025-03-17T18:19:38.679452686Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:19:38.685014 env[1819]: time="2025-03-17T18:19:38.684939474Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:19:38.687539 env[1819]: time="2025-03-17T18:19:38.687465526Z" level=info msg="CreateContainer within sandbox \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:19:38.706783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067736743.mount: Deactivated successfully. 
Mar 17 18:19:38.724709 env[1819]: time="2025-03-17T18:19:38.724637723Z" level=info msg="CreateContainer within sandbox \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\"" Mar 17 18:19:38.727590 env[1819]: time="2025-03-17T18:19:38.726446033Z" level=info msg="StartContainer for \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\"" Mar 17 18:19:38.768431 systemd[1]: Started cri-containerd-721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0.scope. Mar 17 18:19:38.841237 env[1819]: time="2025-03-17T18:19:38.841174141Z" level=info msg="StartContainer for \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\" returns successfully" Mar 17 18:19:45.652223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2336069959.mount: Deactivated successfully. Mar 17 18:19:49.704866 env[1819]: time="2025-03-17T18:19:49.704773344Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:49.709049 env[1819]: time="2025-03-17T18:19:49.708984916Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:49.712231 env[1819]: time="2025-03-17T18:19:49.712177950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:49.713560 env[1819]: time="2025-03-17T18:19:49.713514865Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:19:49.721246 env[1819]: time="2025-03-17T18:19:49.721186753Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:19:49.746616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427326615.mount: Deactivated successfully. Mar 17 18:19:49.758283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258477117.mount: Deactivated successfully. Mar 17 18:19:49.767059 env[1819]: time="2025-03-17T18:19:49.766974092Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\"" Mar 17 18:19:49.768078 env[1819]: time="2025-03-17T18:19:49.768031821Z" level=info msg="StartContainer for \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\"" Mar 17 18:19:49.807522 systemd[1]: Started cri-containerd-ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf.scope. Mar 17 18:19:49.887761 env[1819]: time="2025-03-17T18:19:49.887658177Z" level=info msg="StartContainer for \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\" returns successfully" Mar 17 18:19:49.904897 systemd[1]: cri-containerd-ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf.scope: Deactivated successfully. 
Mar 17 18:19:50.095985 kubelet[2675]: I0317 18:19:50.095751 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gsr2k" podStartSLOduration=13.124636353 podStartE2EDuration="19.095730843s" podCreationTimestamp="2025-03-17 18:19:31 +0000 UTC" firstStartedPulling="2025-03-17 18:19:32.710528125 +0000 UTC m=+6.117898453" lastFinishedPulling="2025-03-17 18:19:38.681622627 +0000 UTC m=+12.088992943" observedRunningTime="2025-03-17 18:19:39.053680297 +0000 UTC m=+12.461050613" watchObservedRunningTime="2025-03-17 18:19:50.095730843 +0000 UTC m=+23.503101207" Mar 17 18:19:50.737824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf-rootfs.mount: Deactivated successfully. Mar 17 18:19:50.772239 env[1819]: time="2025-03-17T18:19:50.772141336Z" level=info msg="shim disconnected" id=ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf Mar 17 18:19:50.772904 env[1819]: time="2025-03-17T18:19:50.772240646Z" level=warning msg="cleaning up after shim disconnected" id=ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf namespace=k8s.io Mar 17 18:19:50.772904 env[1819]: time="2025-03-17T18:19:50.772279633Z" level=info msg="cleaning up dead shim" Mar 17 18:19:50.789023 env[1819]: time="2025-03-17T18:19:50.788957111Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:19:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3306 runtime=io.containerd.runc.v2\n" Mar 17 18:19:51.083144 env[1819]: time="2025-03-17T18:19:51.082935192Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:19:51.120773 env[1819]: time="2025-03-17T18:19:51.120698695Z" level=info msg="CreateContainer within sandbox 
\"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\"" Mar 17 18:19:51.124939 env[1819]: time="2025-03-17T18:19:51.124841875Z" level=info msg="StartContainer for \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\"" Mar 17 18:19:51.170030 systemd[1]: Started cri-containerd-7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f.scope. Mar 17 18:19:51.249360 env[1819]: time="2025-03-17T18:19:51.249290958Z" level=info msg="StartContainer for \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\" returns successfully" Mar 17 18:19:51.272904 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:19:51.274171 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:19:51.276939 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:19:51.281471 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:19:51.289124 systemd[1]: cri-containerd-7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f.scope: Deactivated successfully. Mar 17 18:19:51.304083 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:19:51.346279 env[1819]: time="2025-03-17T18:19:51.345288303Z" level=info msg="shim disconnected" id=7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f Mar 17 18:19:51.347016 env[1819]: time="2025-03-17T18:19:51.346968329Z" level=warning msg="cleaning up after shim disconnected" id=7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f namespace=k8s.io Mar 17 18:19:51.347165 env[1819]: time="2025-03-17T18:19:51.347010820Z" level=info msg="cleaning up dead shim" Mar 17 18:19:51.361236 env[1819]: time="2025-03-17T18:19:51.361170439Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:19:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3369 runtime=io.containerd.runc.v2\n" Mar 17 18:19:51.737308 systemd[1]: run-containerd-runc-k8s.io-7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f-runc.oiMe5J.mount: Deactivated successfully. Mar 17 18:19:51.737507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f-rootfs.mount: Deactivated successfully. Mar 17 18:19:52.082456 env[1819]: time="2025-03-17T18:19:52.082276476Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:19:52.112462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011040098.mount: Deactivated successfully. 
Mar 17 18:19:52.130025 env[1819]: time="2025-03-17T18:19:52.129919657Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\"" Mar 17 18:19:52.132273 env[1819]: time="2025-03-17T18:19:52.132201029Z" level=info msg="StartContainer for \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\"" Mar 17 18:19:52.167264 systemd[1]: Started cri-containerd-bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080.scope. Mar 17 18:19:52.236066 env[1819]: time="2025-03-17T18:19:52.236003696Z" level=info msg="StartContainer for \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\" returns successfully" Mar 17 18:19:52.239384 systemd[1]: cri-containerd-bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080.scope: Deactivated successfully. Mar 17 18:19:52.282627 env[1819]: time="2025-03-17T18:19:52.282552750Z" level=info msg="shim disconnected" id=bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080 Mar 17 18:19:52.282627 env[1819]: time="2025-03-17T18:19:52.282622985Z" level=warning msg="cleaning up after shim disconnected" id=bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080 namespace=k8s.io Mar 17 18:19:52.283061 env[1819]: time="2025-03-17T18:19:52.282646360Z" level=info msg="cleaning up dead shim" Mar 17 18:19:52.296311 env[1819]: time="2025-03-17T18:19:52.296236685Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:19:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3428 runtime=io.containerd.runc.v2\n" Mar 17 18:19:52.737267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080-rootfs.mount: Deactivated successfully. 
Mar 17 18:19:53.086638 env[1819]: time="2025-03-17T18:19:53.086177725Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:19:53.106973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942660751.mount: Deactivated successfully. Mar 17 18:19:53.124416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662628029.mount: Deactivated successfully. Mar 17 18:19:53.129891 env[1819]: time="2025-03-17T18:19:53.129775983Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\"" Mar 17 18:19:53.130994 env[1819]: time="2025-03-17T18:19:53.130939974Z" level=info msg="StartContainer for \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\"" Mar 17 18:19:53.168141 systemd[1]: Started cri-containerd-2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae.scope. Mar 17 18:19:53.256467 systemd[1]: cri-containerd-2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae.scope: Deactivated successfully. 
Mar 17 18:19:53.259987 env[1819]: time="2025-03-17T18:19:53.259524774Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f84c6b6_1fd1_4d1b_bb3e_c2c3bd640aec.slice/cri-containerd-2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae.scope/memory.events\": no such file or directory" Mar 17 18:19:53.262376 env[1819]: time="2025-03-17T18:19:53.262295787Z" level=info msg="StartContainer for \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\" returns successfully" Mar 17 18:19:53.304578 env[1819]: time="2025-03-17T18:19:53.304500067Z" level=info msg="shim disconnected" id=2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae Mar 17 18:19:53.304578 env[1819]: time="2025-03-17T18:19:53.304578066Z" level=warning msg="cleaning up after shim disconnected" id=2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae namespace=k8s.io Mar 17 18:19:53.305005 env[1819]: time="2025-03-17T18:19:53.304600493Z" level=info msg="cleaning up dead shim" Mar 17 18:19:53.319913 env[1819]: time="2025-03-17T18:19:53.319838289Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3484 runtime=io.containerd.runc.v2\n" Mar 17 18:19:54.098030 env[1819]: time="2025-03-17T18:19:54.097961014Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:19:54.151381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257919129.mount: Deactivated successfully. 
Mar 17 18:19:54.165215 env[1819]: time="2025-03-17T18:19:54.165142262Z" level=info msg="CreateContainer within sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\"" Mar 17 18:19:54.165524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314717420.mount: Deactivated successfully. Mar 17 18:19:54.168712 env[1819]: time="2025-03-17T18:19:54.166765017Z" level=info msg="StartContainer for \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\"" Mar 17 18:19:54.200203 systemd[1]: Started cri-containerd-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2.scope. Mar 17 18:19:54.273742 env[1819]: time="2025-03-17T18:19:54.273666847Z" level=info msg="StartContainer for \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\" returns successfully" Mar 17 18:19:54.433369 kubelet[2675]: I0317 18:19:54.432971 2675 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 18:19:54.497240 systemd[1]: Created slice kubepods-burstable-podec88cd3d_219d_4090_b939_2f83c657cafd.slice. Mar 17 18:19:54.510149 systemd[1]: Created slice kubepods-burstable-pod41dd0041_764a_459e_8499_daa44e5c024b.slice. Mar 17 18:19:54.546845 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Mar 17 18:19:54.622845 kubelet[2675]: I0317 18:19:54.622772 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec88cd3d-219d-4090-b939-2f83c657cafd-config-volume\") pod \"coredns-6f6b679f8f-ds8bj\" (UID: \"ec88cd3d-219d-4090-b939-2f83c657cafd\") " pod="kube-system/coredns-6f6b679f8f-ds8bj" Mar 17 18:19:54.623060 kubelet[2675]: I0317 18:19:54.622859 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqk2\" (UniqueName: \"kubernetes.io/projected/41dd0041-764a-459e-8499-daa44e5c024b-kube-api-access-4cqk2\") pod \"coredns-6f6b679f8f-7v9cz\" (UID: \"41dd0041-764a-459e-8499-daa44e5c024b\") " pod="kube-system/coredns-6f6b679f8f-7v9cz" Mar 17 18:19:54.623060 kubelet[2675]: I0317 18:19:54.622912 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt6np\" (UniqueName: \"kubernetes.io/projected/ec88cd3d-219d-4090-b939-2f83c657cafd-kube-api-access-jt6np\") pod \"coredns-6f6b679f8f-ds8bj\" (UID: \"ec88cd3d-219d-4090-b939-2f83c657cafd\") " pod="kube-system/coredns-6f6b679f8f-ds8bj" Mar 17 18:19:54.623060 kubelet[2675]: I0317 18:19:54.622957 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41dd0041-764a-459e-8499-daa44e5c024b-config-volume\") pod \"coredns-6f6b679f8f-7v9cz\" (UID: \"41dd0041-764a-459e-8499-daa44e5c024b\") " pod="kube-system/coredns-6f6b679f8f-7v9cz" Mar 17 18:19:54.804314 env[1819]: time="2025-03-17T18:19:54.803734409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ds8bj,Uid:ec88cd3d-219d-4090-b939-2f83c657cafd,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:54.829081 env[1819]: time="2025-03-17T18:19:54.829005931Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-7v9cz,Uid:41dd0041-764a-459e-8499-daa44e5c024b,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:55.376843 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:19:57.376117 systemd-networkd[1534]: cilium_host: Link UP Mar 17 18:19:57.376443 systemd-networkd[1534]: cilium_net: Link UP Mar 17 18:19:57.376930 (udev-worker)[3619]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:19:57.381277 systemd-networkd[1534]: cilium_net: Gained carrier Mar 17 18:19:57.382745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:19:57.383420 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:19:57.384038 systemd-networkd[1534]: cilium_host: Gained carrier Mar 17 18:19:57.384077 (udev-worker)[3675]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:19:57.570379 systemd[1]: run-containerd-runc-k8s.io-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2-runc.KvODip.mount: Deactivated successfully. Mar 17 18:19:57.706860 systemd-networkd[1534]: cilium_vxlan: Link UP Mar 17 18:19:57.706874 systemd-networkd[1534]: cilium_vxlan: Gained carrier Mar 17 18:19:58.148017 systemd-networkd[1534]: cilium_net: Gained IPv6LL Mar 17 18:19:58.217849 kernel: NET: Registered PF_ALG protocol family Mar 17 18:19:58.339045 systemd-networkd[1534]: cilium_host: Gained IPv6LL Mar 17 18:19:59.428044 systemd-networkd[1534]: cilium_vxlan: Gained IPv6LL Mar 17 18:19:59.593737 systemd-networkd[1534]: lxc_health: Link UP Mar 17 18:19:59.600648 (udev-worker)[3683]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 18:19:59.610475 systemd-networkd[1534]: lxc_health: Gained carrier Mar 17 18:19:59.611025 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:19:59.807901 systemd[1]: run-containerd-runc-k8s.io-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2-runc.Ed9GAY.mount: Deactivated successfully. Mar 17 18:20:00.385667 systemd-networkd[1534]: lxce18f79ba091b: Link UP Mar 17 18:20:00.401767 kernel: eth0: renamed from tmp7f850 Mar 17 18:20:00.415625 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce18f79ba091b: link becomes ready Mar 17 18:20:00.413093 systemd-networkd[1534]: lxce18f79ba091b: Gained carrier Mar 17 18:20:00.444006 systemd-networkd[1534]: lxccbb8bb72b713: Link UP Mar 17 18:20:00.445850 kernel: eth0: renamed from tmp9cdce Mar 17 18:20:00.452172 (udev-worker)[4059]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:20:00.457978 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccbb8bb72b713: link becomes ready Mar 17 18:20:00.456091 systemd-networkd[1534]: lxccbb8bb72b713: Gained carrier Mar 17 18:20:01.027617 systemd-networkd[1534]: lxc_health: Gained IPv6LL Mar 17 18:20:01.453397 kubelet[2675]: I0317 18:20:01.453309 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fr9kf" podStartSLOduration=12.301651379 podStartE2EDuration="28.453284612s" podCreationTimestamp="2025-03-17 18:19:33 +0000 UTC" firstStartedPulling="2025-03-17 18:19:33.564962852 +0000 UTC m=+6.972333180" lastFinishedPulling="2025-03-17 18:19:49.716596097 +0000 UTC m=+23.123966413" observedRunningTime="2025-03-17 18:19:55.12941496 +0000 UTC m=+28.536785300" watchObservedRunningTime="2025-03-17 18:20:01.453284612 +0000 UTC m=+34.860654976" Mar 17 18:20:02.116576 systemd-networkd[1534]: lxce18f79ba091b: Gained IPv6LL Mar 17 18:20:02.371462 systemd-networkd[1534]: lxccbb8bb72b713: Gained IPv6LL
Mar 17 18:20:04.391205 systemd[1]: run-containerd-runc-k8s.io-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2-runc.H6W3PU.mount: Deactivated successfully. Mar 17 18:20:06.623358 systemd[1]: run-containerd-runc-k8s.io-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2-runc.AJgaSf.mount: Deactivated successfully. Mar 17 18:20:07.049132 sudo[2060]: pam_unix(sudo:session): session closed for user root Mar 17 18:20:07.073909 sshd[2057]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:07.079288 systemd-logind[1804]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:20:07.082612 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:20:07.082994 systemd[1]: session-5.scope: Consumed 12.355s CPU time. Mar 17 18:20:07.083880 systemd[1]: sshd@4-172.31.23.140:22-139.178.89.65:47990.service: Deactivated successfully. Mar 17 18:20:07.088362 systemd-logind[1804]: Removed session 5. Mar 17 18:20:09.070183 env[1819]: time="2025-03-17T18:20:09.069292748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:09.070183 env[1819]: time="2025-03-17T18:20:09.069386443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:09.070183 env[1819]: time="2025-03-17T18:20:09.069414018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:09.070183 env[1819]: time="2025-03-17T18:20:09.069762734Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cdce217292bd089eb9de6ab772d594a710a364a28fe8c5c5706c3a3d87b7dfc pid=4177 runtime=io.containerd.runc.v2 Mar 17 18:20:09.106914 systemd[1]: Started cri-containerd-9cdce217292bd089eb9de6ab772d594a710a364a28fe8c5c5706c3a3d87b7dfc.scope.
Mar 17 18:20:09.110780 env[1819]: time="2025-03-17T18:20:09.107161295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:09.110780 env[1819]: time="2025-03-17T18:20:09.107320833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:09.110780 env[1819]: time="2025-03-17T18:20:09.107415956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:09.110780 env[1819]: time="2025-03-17T18:20:09.107720860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f85069922cb22e50ded4f0eca9f01ae59c94f4795c32b91fdd281e2d9be29da pid=4190 runtime=io.containerd.runc.v2 Mar 17 18:20:09.139977 systemd[1]: run-containerd-runc-k8s.io-9cdce217292bd089eb9de6ab772d594a710a364a28fe8c5c5706c3a3d87b7dfc-runc.C94t8i.mount: Deactivated successfully. Mar 17 18:20:09.195712 systemd[1]: Started cri-containerd-7f85069922cb22e50ded4f0eca9f01ae59c94f4795c32b91fdd281e2d9be29da.scope. 
Mar 17 18:20:09.312935 env[1819]: time="2025-03-17T18:20:09.310455427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7v9cz,Uid:41dd0041-764a-459e-8499-daa44e5c024b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cdce217292bd089eb9de6ab772d594a710a364a28fe8c5c5706c3a3d87b7dfc\"" Mar 17 18:20:09.322228 env[1819]: time="2025-03-17T18:20:09.320570673Z" level=info msg="CreateContainer within sandbox \"9cdce217292bd089eb9de6ab772d594a710a364a28fe8c5c5706c3a3d87b7dfc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:20:09.345871 env[1819]: time="2025-03-17T18:20:09.345760216Z" level=info msg="CreateContainer within sandbox \"9cdce217292bd089eb9de6ab772d594a710a364a28fe8c5c5706c3a3d87b7dfc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"689aaa9ee2560a12e53e014b1150925574209a1def519aaa62ddb9b359e45712\"" Mar 17 18:20:09.347572 env[1819]: time="2025-03-17T18:20:09.347504683Z" level=info msg="StartContainer for \"689aaa9ee2560a12e53e014b1150925574209a1def519aaa62ddb9b359e45712\"" Mar 17 18:20:09.408364 env[1819]: time="2025-03-17T18:20:09.407699318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ds8bj,Uid:ec88cd3d-219d-4090-b939-2f83c657cafd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f85069922cb22e50ded4f0eca9f01ae59c94f4795c32b91fdd281e2d9be29da\"" Mar 17 18:20:09.411189 systemd[1]: Started cri-containerd-689aaa9ee2560a12e53e014b1150925574209a1def519aaa62ddb9b359e45712.scope. 
Mar 17 18:20:09.424329 env[1819]: time="2025-03-17T18:20:09.424263697Z" level=info msg="CreateContainer within sandbox \"7f85069922cb22e50ded4f0eca9f01ae59c94f4795c32b91fdd281e2d9be29da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:20:09.454827 env[1819]: time="2025-03-17T18:20:09.454708495Z" level=info msg="CreateContainer within sandbox \"7f85069922cb22e50ded4f0eca9f01ae59c94f4795c32b91fdd281e2d9be29da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48f7151a03a2a7c203298a98ea2c56bc44ef8a49a2fa3127877564dfe074e350\"" Mar 17 18:20:09.455831 env[1819]: time="2025-03-17T18:20:09.455749771Z" level=info msg="StartContainer for \"48f7151a03a2a7c203298a98ea2c56bc44ef8a49a2fa3127877564dfe074e350\"" Mar 17 18:20:09.506237 systemd[1]: Started cri-containerd-48f7151a03a2a7c203298a98ea2c56bc44ef8a49a2fa3127877564dfe074e350.scope. Mar 17 18:20:09.541135 env[1819]: time="2025-03-17T18:20:09.541053213Z" level=info msg="StartContainer for \"689aaa9ee2560a12e53e014b1150925574209a1def519aaa62ddb9b359e45712\" returns successfully" Mar 17 18:20:09.621277 env[1819]: time="2025-03-17T18:20:09.621139392Z" level=info msg="StartContainer for \"48f7151a03a2a7c203298a98ea2c56bc44ef8a49a2fa3127877564dfe074e350\" returns successfully" Mar 17 18:20:10.080255 systemd[1]: run-containerd-runc-k8s.io-7f85069922cb22e50ded4f0eca9f01ae59c94f4795c32b91fdd281e2d9be29da-runc.QvkxdY.mount: Deactivated successfully. 
Mar 17 18:20:10.186733 kubelet[2675]: I0317 18:20:10.186604 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ds8bj" podStartSLOduration=39.186575983 podStartE2EDuration="39.186575983s" podCreationTimestamp="2025-03-17 18:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:10.184128683 +0000 UTC m=+43.591499035" watchObservedRunningTime="2025-03-17 18:20:10.186575983 +0000 UTC m=+43.593946311" Mar 17 18:20:10.187776 kubelet[2675]: I0317 18:20:10.187685 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7v9cz" podStartSLOduration=39.187658383 podStartE2EDuration="39.187658383s" podCreationTimestamp="2025-03-17 18:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:10.161091322 +0000 UTC m=+43.568461650" watchObservedRunningTime="2025-03-17 18:20:10.187658383 +0000 UTC m=+43.595028711" Mar 17 18:20:46.870560 systemd[1]: Started sshd@5-172.31.23.140:22-139.178.89.65:55826.service. Mar 17 18:20:47.043536 sshd[4335]: Accepted publickey for core from 139.178.89.65 port 55826 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:47.046886 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:47.055156 systemd-logind[1804]: New session 6 of user core. Mar 17 18:20:47.056502 systemd[1]: Started session-6.scope. Mar 17 18:20:47.313445 sshd[4335]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:47.319019 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:20:47.320237 systemd[1]: sshd@5-172.31.23.140:22-139.178.89.65:55826.service: Deactivated successfully. Mar 17 18:20:47.322070 systemd-logind[1804]: Session 6 logged out. Waiting for processes to exit. 
Mar 17 18:20:47.324065 systemd-logind[1804]: Removed session 6. Mar 17 18:20:52.341253 systemd[1]: Started sshd@6-172.31.23.140:22-139.178.89.65:56014.service. Mar 17 18:20:52.516292 sshd[4348]: Accepted publickey for core from 139.178.89.65 port 56014 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:52.519046 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:52.528713 systemd[1]: Started session-7.scope. Mar 17 18:20:52.530160 systemd-logind[1804]: New session 7 of user core. Mar 17 18:20:52.780188 sshd[4348]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:52.786933 systemd-logind[1804]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:20:52.787432 systemd[1]: sshd@6-172.31.23.140:22-139.178.89.65:56014.service: Deactivated successfully. Mar 17 18:20:52.789378 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:20:52.791504 systemd-logind[1804]: Removed session 7. Mar 17 18:20:57.811849 systemd[1]: Started sshd@7-172.31.23.140:22-139.178.89.65:56018.service. Mar 17 18:20:57.989303 sshd[4360]: Accepted publickey for core from 139.178.89.65 port 56018 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:57.992494 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:58.001679 systemd[1]: Started session-8.scope. Mar 17 18:20:58.003384 systemd-logind[1804]: New session 8 of user core. Mar 17 18:20:58.264431 sshd[4360]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:58.270161 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:20:58.272684 systemd[1]: sshd@7-172.31.23.140:22-139.178.89.65:56018.service: Deactivated successfully. Mar 17 18:20:58.273755 systemd-logind[1804]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:20:58.276334 systemd-logind[1804]: Removed session 8. 
Mar 17 18:21:03.298370 systemd[1]: Started sshd@8-172.31.23.140:22-139.178.89.65:35906.service. Mar 17 18:21:03.475032 sshd[4374]: Accepted publickey for core from 139.178.89.65 port 35906 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:03.477618 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:03.486176 systemd-logind[1804]: New session 9 of user core. Mar 17 18:21:03.486852 systemd[1]: Started session-9.scope. Mar 17 18:21:03.740386 sshd[4374]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:03.747035 systemd-logind[1804]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:21:03.750635 systemd[1]: sshd@8-172.31.23.140:22-139.178.89.65:35906.service: Deactivated successfully. Mar 17 18:21:03.752053 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:21:03.753346 systemd-logind[1804]: Removed session 9. Mar 17 18:21:08.771516 systemd[1]: Started sshd@9-172.31.23.140:22-139.178.89.65:35918.service. Mar 17 18:21:08.945569 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 35918 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:08.946779 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:08.954935 systemd-logind[1804]: New session 10 of user core. Mar 17 18:21:08.956668 systemd[1]: Started session-10.scope. Mar 17 18:21:09.218214 sshd[4387]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:09.225148 systemd[1]: sshd@9-172.31.23.140:22-139.178.89.65:35918.service: Deactivated successfully. Mar 17 18:21:09.226498 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:21:09.227860 systemd-logind[1804]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:21:09.230627 systemd-logind[1804]: Removed session 10. Mar 17 18:21:09.247296 systemd[1]: Started sshd@10-172.31.23.140:22-139.178.89.65:35920.service. 
Mar 17 18:21:09.428182 sshd[4399]: Accepted publickey for core from 139.178.89.65 port 35920 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:09.431460 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:09.441396 systemd[1]: Started session-11.scope. Mar 17 18:21:09.442792 systemd-logind[1804]: New session 11 of user core. Mar 17 18:21:09.768032 sshd[4399]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:09.773342 systemd[1]: sshd@10-172.31.23.140:22-139.178.89.65:35920.service: Deactivated successfully. Mar 17 18:21:09.774701 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:21:09.776986 systemd-logind[1804]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:21:09.780362 systemd-logind[1804]: Removed session 11. Mar 17 18:21:09.798742 systemd[1]: Started sshd@11-172.31.23.140:22-139.178.89.65:35934.service. Mar 17 18:21:09.979984 sshd[4409]: Accepted publickey for core from 139.178.89.65 port 35934 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:09.982731 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:09.990935 systemd-logind[1804]: New session 12 of user core. Mar 17 18:21:09.991678 systemd[1]: Started session-12.scope. Mar 17 18:21:10.248195 sshd[4409]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:10.253332 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:21:10.254539 systemd[1]: sshd@11-172.31.23.140:22-139.178.89.65:35934.service: Deactivated successfully. Mar 17 18:21:10.256535 systemd-logind[1804]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:21:10.259320 systemd-logind[1804]: Removed session 12. Mar 17 18:21:15.276199 systemd[1]: Started sshd@12-172.31.23.140:22-139.178.89.65:52620.service. 
Mar 17 18:21:15.451427 sshd[4421]: Accepted publickey for core from 139.178.89.65 port 52620 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:15.454116 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:15.463317 systemd-logind[1804]: New session 13 of user core. Mar 17 18:21:15.463514 systemd[1]: Started session-13.scope. Mar 17 18:21:15.713603 sshd[4421]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:15.718787 systemd-logind[1804]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:21:15.721485 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:21:15.722865 systemd[1]: sshd@12-172.31.23.140:22-139.178.89.65:52620.service: Deactivated successfully. Mar 17 18:21:15.724369 systemd-logind[1804]: Removed session 13. Mar 17 18:21:20.742552 systemd[1]: Started sshd@13-172.31.23.140:22-139.178.89.65:52634.service. Mar 17 18:21:20.914169 sshd[4433]: Accepted publickey for core from 139.178.89.65 port 52634 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:20.915350 sshd[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:20.924427 systemd[1]: Started session-14.scope. Mar 17 18:21:20.925696 systemd-logind[1804]: New session 14 of user core. Mar 17 18:21:21.173049 sshd[4433]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:21.178169 systemd[1]: sshd@13-172.31.23.140:22-139.178.89.65:52634.service: Deactivated successfully. Mar 17 18:21:21.179515 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:21:21.181436 systemd-logind[1804]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:21:21.184071 systemd-logind[1804]: Removed session 14. Mar 17 18:21:26.203697 systemd[1]: Started sshd@14-172.31.23.140:22-139.178.89.65:39026.service. 
Mar 17 18:21:26.384516 sshd[4445]: Accepted publickey for core from 139.178.89.65 port 39026 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:26.387194 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:26.396093 systemd-logind[1804]: New session 15 of user core. Mar 17 18:21:26.396926 systemd[1]: Started session-15.scope. Mar 17 18:21:26.647208 sshd[4445]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:26.653040 systemd[1]: sshd@14-172.31.23.140:22-139.178.89.65:39026.service: Deactivated successfully. Mar 17 18:21:26.655181 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:21:26.657285 systemd-logind[1804]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:21:26.659830 systemd-logind[1804]: Removed session 15. Mar 17 18:21:31.675937 systemd[1]: Started sshd@15-172.31.23.140:22-139.178.89.65:51570.service. Mar 17 18:21:31.847953 sshd[4459]: Accepted publickey for core from 139.178.89.65 port 51570 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:31.851287 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:31.859779 systemd-logind[1804]: New session 16 of user core. Mar 17 18:21:31.860826 systemd[1]: Started session-16.scope. Mar 17 18:21:32.113738 sshd[4459]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:32.120464 systemd-logind[1804]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:21:32.121079 systemd[1]: sshd@15-172.31.23.140:22-139.178.89.65:51570.service: Deactivated successfully. Mar 17 18:21:32.122425 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:21:32.124527 systemd-logind[1804]: Removed session 16. Mar 17 18:21:32.143412 systemd[1]: Started sshd@16-172.31.23.140:22-139.178.89.65:51576.service. 
Mar 17 18:21:32.324733 sshd[4471]: Accepted publickey for core from 139.178.89.65 port 51576 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:32.328203 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:32.339363 systemd[1]: Started session-17.scope. Mar 17 18:21:32.341905 systemd-logind[1804]: New session 17 of user core. Mar 17 18:21:32.672180 sshd[4471]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:32.678522 systemd[1]: sshd@16-172.31.23.140:22-139.178.89.65:51576.service: Deactivated successfully. Mar 17 18:21:32.679924 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:21:32.682479 systemd-logind[1804]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:21:32.685112 systemd-logind[1804]: Removed session 17. Mar 17 18:21:32.701560 systemd[1]: Started sshd@17-172.31.23.140:22-139.178.89.65:51578.service. Mar 17 18:21:32.871014 sshd[4483]: Accepted publickey for core from 139.178.89.65 port 51578 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:32.874312 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:32.882907 systemd[1]: Started session-18.scope. Mar 17 18:21:32.884095 systemd-logind[1804]: New session 18 of user core. Mar 17 18:21:35.597001 sshd[4483]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:35.603032 systemd[1]: sshd@17-172.31.23.140:22-139.178.89.65:51578.service: Deactivated successfully. Mar 17 18:21:35.604556 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:21:35.605336 systemd-logind[1804]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:21:35.607888 systemd-logind[1804]: Removed session 18. Mar 17 18:21:35.627128 systemd[1]: Started sshd@18-172.31.23.140:22-139.178.89.65:51592.service. 
Mar 17 18:21:35.801850 sshd[4501]: Accepted publickey for core from 139.178.89.65 port 51592 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:35.804526 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:35.814019 systemd-logind[1804]: New session 19 of user core. Mar 17 18:21:35.815088 systemd[1]: Started session-19.scope. Mar 17 18:21:36.326020 sshd[4501]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:36.333694 systemd-logind[1804]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:21:36.334205 systemd[1]: sshd@18-172.31.23.140:22-139.178.89.65:51592.service: Deactivated successfully. Mar 17 18:21:36.335589 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:21:36.338031 systemd-logind[1804]: Removed session 19. Mar 17 18:21:36.357040 systemd[1]: Started sshd@19-172.31.23.140:22-139.178.89.65:51606.service. Mar 17 18:21:36.534429 sshd[4510]: Accepted publickey for core from 139.178.89.65 port 51606 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:36.537448 sshd[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:36.546941 systemd[1]: Started session-20.scope. Mar 17 18:21:36.547831 systemd-logind[1804]: New session 20 of user core. Mar 17 18:21:36.795872 sshd[4510]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:36.800871 systemd[1]: sshd@19-172.31.23.140:22-139.178.89.65:51606.service: Deactivated successfully. Mar 17 18:21:36.802145 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:21:36.804195 systemd-logind[1804]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:21:36.806463 systemd-logind[1804]: Removed session 20. Mar 17 18:21:41.825106 systemd[1]: Started sshd@20-172.31.23.140:22-139.178.89.65:33398.service. 
Mar 17 18:21:41.999066 sshd[4522]: Accepted publickey for core from 139.178.89.65 port 33398 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:42.002482 sshd[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:42.010946 systemd-logind[1804]: New session 21 of user core. Mar 17 18:21:42.011394 systemd[1]: Started session-21.scope. Mar 17 18:21:42.257044 sshd[4522]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:42.262504 systemd-logind[1804]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:21:42.263115 systemd[1]: sshd@20-172.31.23.140:22-139.178.89.65:33398.service: Deactivated successfully. Mar 17 18:21:42.264433 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:21:42.266665 systemd-logind[1804]: Removed session 21. Mar 17 18:21:47.284784 systemd[1]: Started sshd@21-172.31.23.140:22-139.178.89.65:33412.service. Mar 17 18:21:47.458491 sshd[4537]: Accepted publickey for core from 139.178.89.65 port 33412 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:47.459632 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:47.467951 systemd-logind[1804]: New session 22 of user core. Mar 17 18:21:47.468242 systemd[1]: Started session-22.scope. Mar 17 18:21:47.716083 sshd[4537]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:47.721245 systemd-logind[1804]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:21:47.721863 systemd[1]: sshd@21-172.31.23.140:22-139.178.89.65:33412.service: Deactivated successfully. Mar 17 18:21:47.723184 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:21:47.725112 systemd-logind[1804]: Removed session 22. Mar 17 18:21:52.744559 systemd[1]: Started sshd@22-172.31.23.140:22-139.178.89.65:48646.service. 
Mar 17 18:21:52.916666 sshd[4549]: Accepted publickey for core from 139.178.89.65 port 48646 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:52.918690 sshd[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:52.927722 systemd[1]: Started session-23.scope. Mar 17 18:21:52.928639 systemd-logind[1804]: New session 23 of user core. Mar 17 18:21:53.174361 sshd[4549]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:53.179866 systemd[1]: sshd@22-172.31.23.140:22-139.178.89.65:48646.service: Deactivated successfully. Mar 17 18:21:53.181166 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:21:53.183768 systemd-logind[1804]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:21:53.185857 systemd-logind[1804]: Removed session 23. Mar 17 18:21:58.206298 systemd[1]: Started sshd@23-172.31.23.140:22-139.178.89.65:48650.service. Mar 17 18:21:58.383720 sshd[4561]: Accepted publickey for core from 139.178.89.65 port 48650 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:58.386527 sshd[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:58.394514 systemd-logind[1804]: New session 24 of user core. Mar 17 18:21:58.395516 systemd[1]: Started session-24.scope. Mar 17 18:21:58.638329 sshd[4561]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:58.644189 systemd-logind[1804]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:21:58.644570 systemd[1]: sshd@23-172.31.23.140:22-139.178.89.65:48650.service: Deactivated successfully. Mar 17 18:21:58.645908 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:21:58.647638 systemd-logind[1804]: Removed session 24. Mar 17 18:21:58.668729 systemd[1]: Started sshd@24-172.31.23.140:22-139.178.89.65:48664.service. 
Mar 17 18:21:58.846634 sshd[4573]: Accepted publickey for core from 139.178.89.65 port 48664 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:58.849181 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:58.857292 systemd-logind[1804]: New session 25 of user core. Mar 17 18:21:58.858248 systemd[1]: Started session-25.scope. Mar 17 18:22:02.267047 systemd[1]: run-containerd-runc-k8s.io-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2-runc.anK02z.mount: Deactivated successfully. Mar 17 18:22:02.277483 env[1819]: time="2025-03-17T18:22:02.277425733Z" level=info msg="StopContainer for \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\" with timeout 30 (s)" Mar 17 18:22:02.281581 env[1819]: time="2025-03-17T18:22:02.278578604Z" level=info msg="Stop container \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\" with signal terminated" Mar 17 18:22:02.306109 systemd[1]: cri-containerd-721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0.scope: Deactivated successfully. 
Mar 17 18:22:02.324886 env[1819]: time="2025-03-17T18:22:02.324111751Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:22:02.355107 env[1819]: time="2025-03-17T18:22:02.355040652Z" level=info msg="StopContainer for \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\" with timeout 2 (s)" Mar 17 18:22:02.356096 env[1819]: time="2025-03-17T18:22:02.355778001Z" level=info msg="Stop container \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\" with signal terminated" Mar 17 18:22:02.369993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0-rootfs.mount: Deactivated successfully. Mar 17 18:22:02.390417 systemd-networkd[1534]: lxc_health: Link DOWN Mar 17 18:22:02.390431 systemd-networkd[1534]: lxc_health: Lost carrier Mar 17 18:22:02.395060 env[1819]: time="2025-03-17T18:22:02.394980968Z" level=info msg="shim disconnected" id=721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0 Mar 17 18:22:02.395292 env[1819]: time="2025-03-17T18:22:02.395067175Z" level=warning msg="cleaning up after shim disconnected" id=721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0 namespace=k8s.io Mar 17 18:22:02.395292 env[1819]: time="2025-03-17T18:22:02.395092363Z" level=info msg="cleaning up dead shim" Mar 17 18:22:02.422390 systemd[1]: cri-containerd-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2.scope: Deactivated successfully. Mar 17 18:22:02.423050 systemd[1]: cri-containerd-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2.scope: Consumed 14.822s CPU time. 
Mar 17 18:22:02.427072 env[1819]: time="2025-03-17T18:22:02.426985893Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4626 runtime=io.containerd.runc.v2\n" Mar 17 18:22:02.431160 env[1819]: time="2025-03-17T18:22:02.431086290Z" level=info msg="StopContainer for \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\" returns successfully" Mar 17 18:22:02.432189 env[1819]: time="2025-03-17T18:22:02.432110594Z" level=info msg="StopPodSandbox for \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\"" Mar 17 18:22:02.432468 env[1819]: time="2025-03-17T18:22:02.432221329Z" level=info msg="Container to stop \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.436200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18-shm.mount: Deactivated successfully. Mar 17 18:22:02.474018 systemd[1]: cri-containerd-839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18.scope: Deactivated successfully. Mar 17 18:22:02.494402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2-rootfs.mount: Deactivated successfully. 
Mar 17 18:22:02.504378 env[1819]: time="2025-03-17T18:22:02.504306105Z" level=info msg="shim disconnected" id=c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2 Mar 17 18:22:02.504694 env[1819]: time="2025-03-17T18:22:02.504376773Z" level=warning msg="cleaning up after shim disconnected" id=c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2 namespace=k8s.io Mar 17 18:22:02.504694 env[1819]: time="2025-03-17T18:22:02.504399585Z" level=info msg="cleaning up dead shim" Mar 17 18:22:02.543975 env[1819]: time="2025-03-17T18:22:02.543779371Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4667 runtime=io.containerd.runc.v2\n" Mar 17 18:22:02.550331 env[1819]: time="2025-03-17T18:22:02.550257451Z" level=info msg="shim disconnected" id=839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18 Mar 17 18:22:02.550331 env[1819]: time="2025-03-17T18:22:02.550328058Z" level=warning msg="cleaning up after shim disconnected" id=839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18 namespace=k8s.io Mar 17 18:22:02.550696 env[1819]: time="2025-03-17T18:22:02.550350306Z" level=info msg="cleaning up dead shim" Mar 17 18:22:02.551958 env[1819]: time="2025-03-17T18:22:02.551896165Z" level=info msg="StopContainer for \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\" returns successfully" Mar 17 18:22:02.552731 env[1819]: time="2025-03-17T18:22:02.552682426Z" level=info msg="StopPodSandbox for \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\"" Mar 17 18:22:02.553300 env[1819]: time="2025-03-17T18:22:02.553064012Z" level=info msg="Container to stop \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:22:02.553538 env[1819]: time="2025-03-17T18:22:02.553470511Z" level=info msg="Container to stop \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.553774 env[1819]: time="2025-03-17T18:22:02.553728738Z" level=info msg="Container to stop \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.553992 env[1819]: time="2025-03-17T18:22:02.553952441Z" level=info msg="Container to stop \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.554188 env[1819]: time="2025-03-17T18:22:02.554152168Z" level=info msg="Container to stop \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:02.573251 systemd[1]: cri-containerd-4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685.scope: Deactivated successfully.
Mar 17 18:22:02.583480 env[1819]: time="2025-03-17T18:22:02.583414191Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4687 runtime=io.containerd.runc.v2\n" Mar 17 18:22:02.584305 env[1819]: time="2025-03-17T18:22:02.584235264Z" level=info msg="TearDown network for sandbox \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" successfully" Mar 17 18:22:02.584483 env[1819]: time="2025-03-17T18:22:02.584295960Z" level=info msg="StopPodSandbox for \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" returns successfully" Mar 17 18:22:02.632352 env[1819]: time="2025-03-17T18:22:02.632237102Z" level=info msg="shim disconnected" id=4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685 Mar 17 18:22:02.632981 env[1819]: time="2025-03-17T18:22:02.632945519Z" level=warning msg="cleaning up after shim disconnected" id=4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685 namespace=k8s.io Mar 17 18:22:02.633541 env[1819]: time="2025-03-17T18:22:02.633508125Z" level=info msg="cleaning up dead shim" Mar 17 18:22:02.649078 env[1819]: time="2025-03-17T18:22:02.649020684Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4719 runtime=io.containerd.runc.v2\n" Mar 17 18:22:02.649857 env[1819]: time="2025-03-17T18:22:02.649785129Z" level=info msg="TearDown network for sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" successfully" Mar 17 18:22:02.650049 env[1819]: time="2025-03-17T18:22:02.649999136Z" level=info msg="StopPodSandbox for \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" returns successfully" Mar 17 18:22:02.715167 kubelet[2675]: I0317 18:22:02.715118 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm54s\" (UniqueName: 
\"kubernetes.io/projected/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-kube-api-access-cm54s\") pod \"500ada9e-79d4-43c8-8dc3-12df6c0f0dd4\" (UID: \"500ada9e-79d4-43c8-8dc3-12df6c0f0dd4\") " Mar 17 18:22:02.715961 kubelet[2675]: I0317 18:22:02.715928 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-cilium-config-path\") pod \"500ada9e-79d4-43c8-8dc3-12df6c0f0dd4\" (UID: \"500ada9e-79d4-43c8-8dc3-12df6c0f0dd4\") " Mar 17 18:22:02.722009 kubelet[2675]: I0317 18:22:02.721951 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "500ada9e-79d4-43c8-8dc3-12df6c0f0dd4" (UID: "500ada9e-79d4-43c8-8dc3-12df6c0f0dd4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:02.725422 kubelet[2675]: I0317 18:22:02.725293 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-kube-api-access-cm54s" (OuterVolumeSpecName: "kube-api-access-cm54s") pod "500ada9e-79d4-43c8-8dc3-12df6c0f0dd4" (UID: "500ada9e-79d4-43c8-8dc3-12df6c0f0dd4"). InnerVolumeSpecName "kube-api-access-cm54s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:02.816470 kubelet[2675]: I0317 18:22:02.816332 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-clustermesh-secrets\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.817257 kubelet[2675]: I0317 18:22:02.817205 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-lib-modules\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.817571 kubelet[2675]: I0317 18:22:02.817546 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hostproc\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.817751 kubelet[2675]: I0317 18:22:02.817726 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cni-path\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.818022 kubelet[2675]: I0317 18:22:02.817997 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-run\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.818420 kubelet[2675]: I0317 18:22:02.818381 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-net\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.818642 kubelet[2675]: I0317 18:22:02.817451 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.818817 kubelet[2675]: I0317 18:22:02.818229 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hostproc" (OuterVolumeSpecName: "hostproc") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.818960 kubelet[2675]: I0317 18:22:02.818280 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cni-path" (OuterVolumeSpecName: "cni-path") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.819090 kubelet[2675]: I0317 18:22:02.818307 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.819229 kubelet[2675]: I0317 18:22:02.818563 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.819497 kubelet[2675]: I0317 18:22:02.819453 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.819714 kubelet[2675]: I0317 18:22:02.819670 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-etc-cni-netd\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.820054 kubelet[2675]: I0317 18:22:02.820002 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.820212 kubelet[2675]: I0317 18:22:02.819938 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-kernel\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.820936 kubelet[2675]: I0317 18:22:02.820381 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hubble-tls\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.821225 kubelet[2675]: I0317 18:22:02.821174 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-bpf-maps\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.821508 kubelet[2675]: I0317 18:22:02.821483 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t6jl\" (UniqueName: \"kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-kube-api-access-2t6jl\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.821688 kubelet[2675]: I0317 18:22:02.821663 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-xtables-lock\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.821904 kubelet[2675]: I0317 18:22:02.821859 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-config-path\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.822161 kubelet[2675]: I0317 18:22:02.822115 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-cgroup\") pod \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\" (UID: \"9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec\") " Mar 17 18:22:02.823286 kubelet[2675]: I0317 18:22:02.823249 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-cilium-config-path\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.823522 kubelet[2675]: I0317 18:22:02.823495 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-run\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.823702 kubelet[2675]: I0317 18:22:02.823675 2675 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cm54s\" (UniqueName: \"kubernetes.io/projected/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4-kube-api-access-cm54s\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.823927 kubelet[2675]: I0317 18:22:02.823901 2675 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-net\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.824110 kubelet[2675]: I0317 18:22:02.824085 2675 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-etc-cni-netd\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.824277 kubelet[2675]: I0317 18:22:02.824246 2675 
reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-host-proc-sys-kernel\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.824440 kubelet[2675]: I0317 18:22:02.824418 2675 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-lib-modules\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.824582 kubelet[2675]: I0317 18:22:02.824561 2675 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hostproc\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.824728 kubelet[2675]: I0317 18:22:02.824706 2675 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cni-path\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.824909 kubelet[2675]: I0317 18:22:02.821383 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.825067 kubelet[2675]: I0317 18:22:02.822340 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.825210 kubelet[2675]: I0317 18:22:02.823107 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:02.825427 kubelet[2675]: I0317 18:22:02.825373 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:02.828191 kubelet[2675]: I0317 18:22:02.828133 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:02.831231 kubelet[2675]: I0317 18:22:02.831158 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-kube-api-access-2t6jl" (OuterVolumeSpecName: "kube-api-access-2t6jl") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "kube-api-access-2t6jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:02.834278 kubelet[2675]: I0317 18:22:02.834225 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" (UID: "9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:02.925490 systemd[1]: Removed slice kubepods-besteffort-pod500ada9e_79d4_43c8_8dc3_12df6c0f0dd4.slice. Mar 17 18:22:02.928737 kubelet[2675]: I0317 18:22:02.928697 2675 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2t6jl\" (UniqueName: \"kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-kube-api-access-2t6jl\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.929315 kubelet[2675]: I0317 18:22:02.929268 2675 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-xtables-lock\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.930389 systemd[1]: Removed slice kubepods-burstable-pod9f84c6b6_1fd1_4d1b_bb3e_c2c3bd640aec.slice. Mar 17 18:22:02.930617 systemd[1]: kubepods-burstable-pod9f84c6b6_1fd1_4d1b_bb3e_c2c3bd640aec.slice: Consumed 15.053s CPU time. 
Mar 17 18:22:02.931411 kubelet[2675]: I0317 18:22:02.931069 2675 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-hubble-tls\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.931411 kubelet[2675]: I0317 18:22:02.931114 2675 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-bpf-maps\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.931411 kubelet[2675]: I0317 18:22:02.931136 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-cgroup\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.931411 kubelet[2675]: I0317 18:22:02.931167 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-cilium-config-path\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:02.931411 kubelet[2675]: I0317 18:22:02.931190 2675 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec-clustermesh-secrets\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:03.255320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685-rootfs.mount: Deactivated successfully. Mar 17 18:22:03.255497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685-shm.mount: Deactivated successfully. Mar 17 18:22:03.255634 systemd[1]: var-lib-kubelet-pods-9f84c6b6\x2d1fd1\x2d4d1b\x2dbb3e\x2dc2c3bd640aec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2t6jl.mount: Deactivated successfully. 
Mar 17 18:22:03.255770 systemd[1]: var-lib-kubelet-pods-9f84c6b6\x2d1fd1\x2d4d1b\x2dbb3e\x2dc2c3bd640aec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:22:03.255947 systemd[1]: var-lib-kubelet-pods-9f84c6b6\x2d1fd1\x2d4d1b\x2dbb3e\x2dc2c3bd640aec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:22:03.256116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18-rootfs.mount: Deactivated successfully. Mar 17 18:22:03.256268 systemd[1]: var-lib-kubelet-pods-500ada9e\x2d79d4\x2d43c8\x2d8dc3\x2d12df6c0f0dd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcm54s.mount: Deactivated successfully. Mar 17 18:22:03.465995 kubelet[2675]: I0317 18:22:03.465373 2675 scope.go:117] "RemoveContainer" containerID="721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0" Mar 17 18:22:03.473400 env[1819]: time="2025-03-17T18:22:03.472870908Z" level=info msg="RemoveContainer for \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\"" Mar 17 18:22:03.482383 env[1819]: time="2025-03-17T18:22:03.482290404Z" level=info msg="RemoveContainer for \"721482b9c09c0b71d6c25b4956cdb4a84293aabdccfe95720ec1da2f7d5b11c0\" returns successfully" Mar 17 18:22:03.484327 kubelet[2675]: I0317 18:22:03.484292 2675 scope.go:117] "RemoveContainer" containerID="c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2" Mar 17 18:22:03.494957 env[1819]: time="2025-03-17T18:22:03.494750365Z" level=info msg="RemoveContainer for \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\"" Mar 17 18:22:03.503092 env[1819]: time="2025-03-17T18:22:03.502060066Z" level=info msg="RemoveContainer for \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\" returns successfully" Mar 17 18:22:03.503294 kubelet[2675]: I0317 18:22:03.502713 2675 scope.go:117] "RemoveContainer" 
containerID="2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae" Mar 17 18:22:03.506305 env[1819]: time="2025-03-17T18:22:03.506143375Z" level=info msg="RemoveContainer for \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\"" Mar 17 18:22:03.518504 env[1819]: time="2025-03-17T18:22:03.518428761Z" level=info msg="RemoveContainer for \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\" returns successfully" Mar 17 18:22:03.519969 kubelet[2675]: I0317 18:22:03.519927 2675 scope.go:117] "RemoveContainer" containerID="bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080" Mar 17 18:22:03.525471 env[1819]: time="2025-03-17T18:22:03.525011552Z" level=info msg="RemoveContainer for \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\"" Mar 17 18:22:03.529389 env[1819]: time="2025-03-17T18:22:03.529334092Z" level=info msg="RemoveContainer for \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\" returns successfully" Mar 17 18:22:03.529985 kubelet[2675]: I0317 18:22:03.529937 2675 scope.go:117] "RemoveContainer" containerID="7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f" Mar 17 18:22:03.532484 env[1819]: time="2025-03-17T18:22:03.532429588Z" level=info msg="RemoveContainer for \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\"" Mar 17 18:22:03.536599 env[1819]: time="2025-03-17T18:22:03.536521189Z" level=info msg="RemoveContainer for \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\" returns successfully" Mar 17 18:22:03.537100 kubelet[2675]: I0317 18:22:03.537003 2675 scope.go:117] "RemoveContainer" containerID="ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf" Mar 17 18:22:03.539455 env[1819]: time="2025-03-17T18:22:03.539399366Z" level=info msg="RemoveContainer for \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\"" Mar 17 18:22:03.543483 env[1819]: time="2025-03-17T18:22:03.543409259Z" level=info 
msg="RemoveContainer for \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\" returns successfully" Mar 17 18:22:03.543852 kubelet[2675]: I0317 18:22:03.543785 2675 scope.go:117] "RemoveContainer" containerID="c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2" Mar 17 18:22:03.544514 env[1819]: time="2025-03-17T18:22:03.544348831Z" level=error msg="ContainerStatus for \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\": not found" Mar 17 18:22:03.545023 kubelet[2675]: E0317 18:22:03.544958 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\": not found" containerID="c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2" Mar 17 18:22:03.545127 kubelet[2675]: I0317 18:22:03.545020 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2"} err="failed to get container status \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c909ca45101deb5197cf7b11bd368c52e12704d02f0bc59ec78d99bd1418a3d2\": not found" Mar 17 18:22:03.545194 kubelet[2675]: I0317 18:22:03.545139 2675 scope.go:117] "RemoveContainer" containerID="2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae" Mar 17 18:22:03.545628 env[1819]: time="2025-03-17T18:22:03.545527935Z" level=error msg="ContainerStatus for \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\": not found" Mar 17 18:22:03.545965 kubelet[2675]: E0317 18:22:03.545929 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\": not found" containerID="2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae" Mar 17 18:22:03.546174 kubelet[2675]: I0317 18:22:03.546136 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae"} err="failed to get container status \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"2acec76e8b00e37201623c0c93c9a072dd71ea8e0abe65e6caec86ca864617ae\": not found" Mar 17 18:22:03.546314 kubelet[2675]: I0317 18:22:03.546291 2675 scope.go:117] "RemoveContainer" containerID="bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080" Mar 17 18:22:03.546870 env[1819]: time="2025-03-17T18:22:03.546760246Z" level=error msg="ContainerStatus for \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\": not found" Mar 17 18:22:03.547259 kubelet[2675]: E0317 18:22:03.547219 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\": not found" containerID="bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080" Mar 17 18:22:03.547364 kubelet[2675]: I0317 18:22:03.547270 2675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080"} err="failed to get container status \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf55e7f70e83f64a2a4042e95086c803b7cc8fd52db4a3c9d06f9ed288e4a080\": not found" Mar 17 18:22:03.547364 kubelet[2675]: I0317 18:22:03.547305 2675 scope.go:117] "RemoveContainer" containerID="7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f" Mar 17 18:22:03.547760 env[1819]: time="2025-03-17T18:22:03.547611127Z" level=error msg="ContainerStatus for \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\": not found" Mar 17 18:22:03.547998 kubelet[2675]: E0317 18:22:03.547954 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\": not found" containerID="7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f" Mar 17 18:22:03.548105 kubelet[2675]: I0317 18:22:03.548004 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f"} err="failed to get container status \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e85d94b372480d75e7a31f3ac03ba355291073f24b01434f40f539035c4bd1f\": not found" Mar 17 18:22:03.548105 kubelet[2675]: I0317 18:22:03.548037 2675 scope.go:117] "RemoveContainer" containerID="ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf" Mar 17 18:22:03.548540 env[1819]: 
time="2025-03-17T18:22:03.548463100Z" level=error msg="ContainerStatus for \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\": not found" Mar 17 18:22:03.548930 kubelet[2675]: E0317 18:22:03.548892 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\": not found" containerID="ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf" Mar 17 18:22:03.549022 kubelet[2675]: I0317 18:22:03.548950 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf"} err="failed to get container status \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff21e5e8ccf23d681a61d92b9f497dcbab3b79280f0ec870522623c5db1b1daf\": not found" Mar 17 18:22:04.152406 sshd[4573]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:04.157313 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:22:04.157647 systemd[1]: session-25.scope: Consumed 2.550s CPU time. Mar 17 18:22:04.158522 systemd[1]: sshd@24-172.31.23.140:22-139.178.89.65:48664.service: Deactivated successfully. Mar 17 18:22:04.160919 systemd-logind[1804]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:22:04.163225 systemd-logind[1804]: Removed session 25. Mar 17 18:22:04.178876 systemd[1]: Started sshd@25-172.31.23.140:22-139.178.89.65:41190.service. 
Mar 17 18:22:04.346770 sshd[4738]: Accepted publickey for core from 139.178.89.65 port 41190 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:04.350087 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:04.358498 systemd-logind[1804]: New session 26 of user core. Mar 17 18:22:04.359519 systemd[1]: Started session-26.scope. Mar 17 18:22:04.917620 kubelet[2675]: I0317 18:22:04.917563 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500ada9e-79d4-43c8-8dc3-12df6c0f0dd4" path="/var/lib/kubelet/pods/500ada9e-79d4-43c8-8dc3-12df6c0f0dd4/volumes" Mar 17 18:22:04.918781 kubelet[2675]: I0317 18:22:04.918731 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" path="/var/lib/kubelet/pods/9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec/volumes" Mar 17 18:22:05.721161 sshd[4738]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:05.727621 systemd[1]: sshd@25-172.31.23.140:22-139.178.89.65:41190.service: Deactivated successfully. Mar 17 18:22:05.729020 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:22:05.729320 systemd[1]: session-26.scope: Consumed 1.139s CPU time. Mar 17 18:22:05.731078 systemd-logind[1804]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:22:05.733630 systemd-logind[1804]: Removed session 26. 
Mar 17 18:22:05.753409 kubelet[2675]: E0317 18:22:05.753355 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" containerName="mount-bpf-fs" Mar 17 18:22:05.753684 kubelet[2675]: E0317 18:22:05.753655 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" containerName="cilium-agent" Mar 17 18:22:05.753877 kubelet[2675]: E0317 18:22:05.753852 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" containerName="apply-sysctl-overwrites" Mar 17 18:22:05.754058 kubelet[2675]: E0317 18:22:05.754034 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" containerName="clean-cilium-state" Mar 17 18:22:05.754197 kubelet[2675]: E0317 18:22:05.754175 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="500ada9e-79d4-43c8-8dc3-12df6c0f0dd4" containerName="cilium-operator" Mar 17 18:22:05.754360 kubelet[2675]: E0317 18:22:05.754338 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" containerName="mount-cgroup" Mar 17 18:22:05.754559 kubelet[2675]: I0317 18:22:05.754516 2675 memory_manager.go:354] "RemoveStaleState removing state" podUID="500ada9e-79d4-43c8-8dc3-12df6c0f0dd4" containerName="cilium-operator" Mar 17 18:22:05.754720 kubelet[2675]: I0317 18:22:05.754697 2675 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f84c6b6-1fd1-4d1b-bb3e-c2c3bd640aec" containerName="cilium-agent" Mar 17 18:22:05.757063 systemd[1]: Started sshd@26-172.31.23.140:22-139.178.89.65:41206.service. 
Mar 17 18:22:05.779927 kubelet[2675]: W0317 18:22:05.779869 2675 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-23-140" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-140' and this object Mar 17 18:22:05.780465 kubelet[2675]: E0317 18:22:05.780416 2675 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-23-140\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-140' and this object" logger="UnhandledError" Mar 17 18:22:05.780616 kubelet[2675]: W0317 18:22:05.780278 2675 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-23-140" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-140' and this object Mar 17 18:22:05.780764 kubelet[2675]: E0317 18:22:05.780735 2675 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-23-140\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-140' and this object" logger="UnhandledError" Mar 17 18:22:05.780901 kubelet[2675]: W0317 18:22:05.780356 2675 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-23-140" cannot list resource "configmaps" in API group "" in the namespace 
"kube-system": no relationship found between node 'ip-172-31-23-140' and this object Mar 17 18:22:05.781125 kubelet[2675]: E0317 18:22:05.781088 2675 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-23-140\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-140' and this object" logger="UnhandledError" Mar 17 18:22:05.781491 kubelet[2675]: W0317 18:22:05.781452 2675 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-23-140" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-140' and this object Mar 17 18:22:05.781883 kubelet[2675]: E0317 18:22:05.781702 2675 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-23-140\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-140' and this object" logger="UnhandledError" Mar 17 18:22:05.786329 systemd[1]: Created slice kubepods-burstable-pode4eec020_ed43_49f6_9a7c_7a7fd1545a06.slice. 
Mar 17 18:22:05.850340 kubelet[2675]: I0317 18:22:05.850281 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-xtables-lock\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.850639 kubelet[2675]: I0317 18:22:05.850597 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-clustermesh-secrets\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.850986 kubelet[2675]: I0317 18:22:05.850907 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-ipsec-secrets\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.851279 kubelet[2675]: I0317 18:22:05.851203 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-kernel\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.851477 kubelet[2675]: I0317 18:22:05.851438 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-run\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.851682 kubelet[2675]: I0317 18:22:05.851643 2675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-bpf-maps\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.851965 kubelet[2675]: I0317 18:22:05.851892 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-lib-modules\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.852219 kubelet[2675]: I0317 18:22:05.852139 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-config-path\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.852493 kubelet[2675]: I0317 18:22:05.852413 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hostproc\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.852691 kubelet[2675]: I0317 18:22:05.852653 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-cgroup\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.852951 kubelet[2675]: I0317 18:22:05.852885 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-net\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.853192 kubelet[2675]: I0317 18:22:05.853126 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cni-path\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.853425 kubelet[2675]: I0317 18:22:05.853345 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-etc-cni-netd\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.853612 kubelet[2675]: I0317 18:22:05.853574 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hubble-tls\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.853865 kubelet[2675]: I0317 18:22:05.853785 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwj52\" (UniqueName: \"kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-kube-api-access-rwj52\") pod \"cilium-s56xm\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " pod="kube-system/cilium-s56xm" Mar 17 18:22:05.955224 sshd[4748]: Accepted publickey for core from 139.178.89.65 port 41206 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:05.958009 sshd[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:05.967453 systemd[1]: Started session-27.scope. 
Mar 17 18:22:05.970689 systemd-logind[1804]: New session 27 of user core. Mar 17 18:22:06.265256 kubelet[2675]: E0317 18:22:06.265174 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-s56xm" podUID="e4eec020-ed43-49f6-9a7c-7a7fd1545a06" Mar 17 18:22:06.266736 sshd[4748]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:06.271974 systemd[1]: sshd@26-172.31.23.140:22-139.178.89.65:41206.service: Deactivated successfully. Mar 17 18:22:06.273292 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 18:22:06.275105 systemd-logind[1804]: Session 27 logged out. Waiting for processes to exit. Mar 17 18:22:06.277424 systemd-logind[1804]: Removed session 27. Mar 17 18:22:06.297063 systemd[1]: Started sshd@27-172.31.23.140:22-139.178.89.65:41212.service. Mar 17 18:22:06.475968 sshd[4761]: Accepted publickey for core from 139.178.89.65 port 41212 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:06.477771 sshd[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:06.490671 systemd[1]: Started session-28.scope. Mar 17 18:22:06.491466 systemd-logind[1804]: New session 28 of user core. 
Mar 17 18:22:06.663007 kubelet[2675]: I0317 18:22:06.662852 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-etc-cni-netd\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663007 kubelet[2675]: I0317 18:22:06.662933 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-xtables-lock\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663007 kubelet[2675]: I0317 18:22:06.662983 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwj52\" (UniqueName: \"kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-kube-api-access-rwj52\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663291 kubelet[2675]: I0317 18:22:06.663037 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-bpf-maps\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663291 kubelet[2675]: I0317 18:22:06.663076 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-run\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663291 kubelet[2675]: I0317 18:22:06.663111 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-lib-modules\") pod 
\"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663291 kubelet[2675]: I0317 18:22:06.663144 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-cgroup\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663291 kubelet[2675]: I0317 18:22:06.663182 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-net\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663291 kubelet[2675]: I0317 18:22:06.663217 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cni-path\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663624 kubelet[2675]: I0317 18:22:06.663251 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-kernel\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663624 kubelet[2675]: I0317 18:22:06.663288 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hostproc\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.663624 kubelet[2675]: I0317 18:22:06.663442 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hostproc" (OuterVolumeSpecName: "hostproc") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.663624 kubelet[2675]: I0317 18:22:06.663491 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.663624 kubelet[2675]: I0317 18:22:06.663535 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.664371 kubelet[2675]: I0317 18:22:06.664027 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.664371 kubelet[2675]: I0317 18:22:06.664097 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.664371 kubelet[2675]: I0317 18:22:06.664141 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.664371 kubelet[2675]: I0317 18:22:06.664184 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.664371 kubelet[2675]: I0317 18:22:06.664228 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cni-path" (OuterVolumeSpecName: "cni-path") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.664710 kubelet[2675]: I0317 18:22:06.664273 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.664710 kubelet[2675]: I0317 18:22:06.664316 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:06.669608 kubelet[2675]: I0317 18:22:06.669512 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-kube-api-access-rwj52" (OuterVolumeSpecName: "kube-api-access-rwj52") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "kube-api-access-rwj52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:06.671340 systemd[1]: var-lib-kubelet-pods-e4eec020\x2ded43\x2d49f6\x2d9a7c\x2d7a7fd1545a06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drwj52.mount: Deactivated successfully. 
Mar 17 18:22:06.764267 kubelet[2675]: I0317 18:22:06.764218 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hubble-tls\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.764595 kubelet[2675]: I0317 18:22:06.764566 2675 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cni-path\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.764734 kubelet[2675]: I0317 18:22:06.764708 2675 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-kernel\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.764916 kubelet[2675]: I0317 18:22:06.764894 2675 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hostproc\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.765094 kubelet[2675]: I0317 18:22:06.765071 2675 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-etc-cni-netd\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.765234 kubelet[2675]: I0317 18:22:06.765212 2675 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-xtables-lock\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.765389 kubelet[2675]: I0317 18:22:06.765367 2675 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rwj52\" (UniqueName: \"kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-kube-api-access-rwj52\") on node \"ip-172-31-23-140\" DevicePath 
\"\"" Mar 17 18:22:06.765530 kubelet[2675]: I0317 18:22:06.765506 2675 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-bpf-maps\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.765668 kubelet[2675]: I0317 18:22:06.765647 2675 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-lib-modules\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.765825 kubelet[2675]: I0317 18:22:06.765781 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-run\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.766068 kubelet[2675]: I0317 18:22:06.766043 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-cgroup\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.766235 kubelet[2675]: I0317 18:22:06.766195 2675 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-host-proc-sys-net\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.780546 systemd[1]: var-lib-kubelet-pods-e4eec020\x2ded43\x2d49f6\x2d9a7c\x2d7a7fd1545a06-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:22:06.785317 kubelet[2675]: I0317 18:22:06.785265 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:06.866854 kubelet[2675]: I0317 18:22:06.866766 2675 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-hubble-tls\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:06.957219 kubelet[2675]: E0317 18:22:06.957177 2675 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:22:06.957492 kubelet[2675]: E0317 18:22:06.957468 2675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-config-path podName:e4eec020-ed43-49f6-9a7c-7a7fd1545a06 nodeName:}" failed. No retries permitted until 2025-03-17 18:22:07.457441959 +0000 UTC m=+160.864812287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-config-path") pod "cilium-s56xm" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:22:06.957658 kubelet[2675]: E0317 18:22:06.957183 2675 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Mar 17 18:22:06.957870 kubelet[2675]: E0317 18:22:06.957845 2675 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-ipsec-secrets podName:e4eec020-ed43-49f6-9a7c-7a7fd1545a06 nodeName:}" failed. No retries permitted until 2025-03-17 18:22:07.457787594 +0000 UTC m=+160.865157946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-ipsec-secrets") pod "cilium-s56xm" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:22:06.968080 kubelet[2675]: I0317 18:22:06.968021 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-clustermesh-secrets\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:06.973200 kubelet[2675]: I0317 18:22:06.973129 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:07.003746 systemd[1]: var-lib-kubelet-pods-e4eec020\x2ded43\x2d49f6\x2d9a7c\x2d7a7fd1545a06-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:22:07.068681 kubelet[2675]: I0317 18:22:07.068617 2675 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-clustermesh-secrets\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:07.128940 kubelet[2675]: E0317 18:22:07.128894 2675 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:22:07.571960 kubelet[2675]: I0317 18:22:07.571884 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-ipsec-secrets\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:07.571960 kubelet[2675]: I0317 18:22:07.571963 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-config-path\") pod \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\" (UID: \"e4eec020-ed43-49f6-9a7c-7a7fd1545a06\") " Mar 17 18:22:07.577048 kubelet[2675]: I0317 18:22:07.576971 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:07.580772 systemd[1]: var-lib-kubelet-pods-e4eec020\x2ded43\x2d49f6\x2d9a7c\x2d7a7fd1545a06-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:22:07.583290 kubelet[2675]: I0317 18:22:07.583220 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e4eec020-ed43-49f6-9a7c-7a7fd1545a06" (UID: "e4eec020-ed43-49f6-9a7c-7a7fd1545a06"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:07.672741 kubelet[2675]: I0317 18:22:07.672679 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-ipsec-secrets\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:07.672741 kubelet[2675]: I0317 18:22:07.672738 2675 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4eec020-ed43-49f6-9a7c-7a7fd1545a06-cilium-config-path\") on node \"ip-172-31-23-140\" DevicePath \"\"" Mar 17 18:22:07.809181 systemd[1]: Removed slice kubepods-burstable-pode4eec020_ed43_49f6_9a7c_7a7fd1545a06.slice. Mar 17 18:22:07.901393 systemd[1]: Created slice kubepods-burstable-podb0d4dd19_8e67_47a7_9fd4_97157f960da3.slice. 
Mar 17 18:22:07.976057 kubelet[2675]: I0317 18:22:07.976013 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-hostproc\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976325 kubelet[2675]: I0317 18:22:07.976294 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b0d4dd19-8e67-47a7-9fd4-97157f960da3-cilium-ipsec-secrets\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976467 kubelet[2675]: I0317 18:22:07.976442 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-host-proc-sys-kernel\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976676 kubelet[2675]: I0317 18:22:07.976629 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-cni-path\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976754 kubelet[2675]: I0317 18:22:07.976682 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-host-proc-sys-net\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976754 kubelet[2675]: I0317 18:22:07.976723 2675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-cilium-cgroup\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976927 kubelet[2675]: I0317 18:22:07.976760 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-lib-modules\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976927 kubelet[2675]: I0317 18:22:07.976833 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-etc-cni-netd\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976927 kubelet[2675]: I0317 18:22:07.976875 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwfkx\" (UniqueName: \"kubernetes.io/projected/b0d4dd19-8e67-47a7-9fd4-97157f960da3-kube-api-access-fwfkx\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.976927 kubelet[2675]: I0317 18:22:07.976915 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-bpf-maps\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.977155 kubelet[2675]: I0317 18:22:07.976949 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b0d4dd19-8e67-47a7-9fd4-97157f960da3-clustermesh-secrets\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.977155 kubelet[2675]: I0317 18:22:07.976984 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0d4dd19-8e67-47a7-9fd4-97157f960da3-cilium-config-path\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.977155 kubelet[2675]: I0317 18:22:07.977024 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-cilium-run\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.977155 kubelet[2675]: I0317 18:22:07.977056 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0d4dd19-8e67-47a7-9fd4-97157f960da3-xtables-lock\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:07.977155 kubelet[2675]: I0317 18:22:07.977089 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0d4dd19-8e67-47a7-9fd4-97157f960da3-hubble-tls\") pod \"cilium-jdtx8\" (UID: \"b0d4dd19-8e67-47a7-9fd4-97157f960da3\") " pod="kube-system/cilium-jdtx8" Mar 17 18:22:08.206934 env[1819]: time="2025-03-17T18:22:08.206862602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdtx8,Uid:b0d4dd19-8e67-47a7-9fd4-97157f960da3,Namespace:kube-system,Attempt:0,}" Mar 17 18:22:08.235072 env[1819]: time="2025-03-17T18:22:08.234895800Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:22:08.235072 env[1819]: time="2025-03-17T18:22:08.234991188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:22:08.235575 env[1819]: time="2025-03-17T18:22:08.235464778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:22:08.236180 env[1819]: time="2025-03-17T18:22:08.236071124Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f pid=4791 runtime=io.containerd.runc.v2 Mar 17 18:22:08.259223 systemd[1]: Started cri-containerd-aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f.scope. Mar 17 18:22:08.317831 env[1819]: time="2025-03-17T18:22:08.317712563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdtx8,Uid:b0d4dd19-8e67-47a7-9fd4-97157f960da3,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\"" Mar 17 18:22:08.324466 env[1819]: time="2025-03-17T18:22:08.324395901Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:22:08.358828 env[1819]: time="2025-03-17T18:22:08.358736430Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f5e6b374821cf2efb8a068371bc2ec02193467298fa0c556c9dfc4af9d933c0\"" Mar 17 18:22:08.362126 env[1819]: time="2025-03-17T18:22:08.362072645Z" level=info msg="StartContainer for \"8f5e6b374821cf2efb8a068371bc2ec02193467298fa0c556c9dfc4af9d933c0\"" Mar 17 
18:22:08.400285 systemd[1]: Started cri-containerd-8f5e6b374821cf2efb8a068371bc2ec02193467298fa0c556c9dfc4af9d933c0.scope. Mar 17 18:22:08.460161 env[1819]: time="2025-03-17T18:22:08.459536706Z" level=info msg="StartContainer for \"8f5e6b374821cf2efb8a068371bc2ec02193467298fa0c556c9dfc4af9d933c0\" returns successfully" Mar 17 18:22:08.476666 systemd[1]: cri-containerd-8f5e6b374821cf2efb8a068371bc2ec02193467298fa0c556c9dfc4af9d933c0.scope: Deactivated successfully. Mar 17 18:22:08.540070 env[1819]: time="2025-03-17T18:22:08.540005258Z" level=info msg="shim disconnected" id=8f5e6b374821cf2efb8a068371bc2ec02193467298fa0c556c9dfc4af9d933c0 Mar 17 18:22:08.540598 env[1819]: time="2025-03-17T18:22:08.540545771Z" level=warning msg="cleaning up after shim disconnected" id=8f5e6b374821cf2efb8a068371bc2ec02193467298fa0c556c9dfc4af9d933c0 namespace=k8s.io Mar 17 18:22:08.540729 env[1819]: time="2025-03-17T18:22:08.540701435Z" level=info msg="cleaning up dead shim" Mar 17 18:22:08.562009 env[1819]: time="2025-03-17T18:22:08.561925043Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4873 runtime=io.containerd.runc.v2\n" Mar 17 18:22:08.917269 kubelet[2675]: I0317 18:22:08.916832 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4eec020-ed43-49f6-9a7c-7a7fd1545a06" path="/var/lib/kubelet/pods/e4eec020-ed43-49f6-9a7c-7a7fd1545a06/volumes" Mar 17 18:22:09.524633 env[1819]: time="2025-03-17T18:22:09.524568025Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:22:09.548661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89734912.mount: Deactivated successfully. 
Mar 17 18:22:09.562737 env[1819]: time="2025-03-17T18:22:09.562630867Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d\"" Mar 17 18:22:09.564818 env[1819]: time="2025-03-17T18:22:09.564700990Z" level=info msg="StartContainer for \"f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d\"" Mar 17 18:22:09.617205 systemd[1]: Started cri-containerd-f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d.scope. Mar 17 18:22:09.702955 env[1819]: time="2025-03-17T18:22:09.702891123Z" level=info msg="StartContainer for \"f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d\" returns successfully" Mar 17 18:22:09.713713 systemd[1]: cri-containerd-f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d.scope: Deactivated successfully. Mar 17 18:22:09.754841 env[1819]: time="2025-03-17T18:22:09.754759718Z" level=info msg="shim disconnected" id=f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d Mar 17 18:22:09.755309 env[1819]: time="2025-03-17T18:22:09.755269248Z" level=warning msg="cleaning up after shim disconnected" id=f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d namespace=k8s.io Mar 17 18:22:09.755436 env[1819]: time="2025-03-17T18:22:09.755408387Z" level=info msg="cleaning up dead shim" Mar 17 18:22:09.770721 env[1819]: time="2025-03-17T18:22:09.770660159Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4937 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:22:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Mar 17 18:22:10.088025 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-f583e2905d61f66af27db3e3a477a14c61a82ad414ab1ab09f315934a8c1317d-rootfs.mount: Deactivated successfully. Mar 17 18:22:10.227842 kubelet[2675]: I0317 18:22:10.227657 2675 setters.go:600] "Node became not ready" node="ip-172-31-23-140" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:22:10Z","lastTransitionTime":"2025-03-17T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:22:10.519215 env[1819]: time="2025-03-17T18:22:10.519160179Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:22:10.558768 env[1819]: time="2025-03-17T18:22:10.558684421Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f\"" Mar 17 18:22:10.562849 env[1819]: time="2025-03-17T18:22:10.561492830Z" level=info msg="StartContainer for \"772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f\"" Mar 17 18:22:10.625001 systemd[1]: Started cri-containerd-772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f.scope. Mar 17 18:22:10.717031 env[1819]: time="2025-03-17T18:22:10.716961282Z" level=info msg="StartContainer for \"772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f\" returns successfully" Mar 17 18:22:10.729088 systemd[1]: cri-containerd-772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f.scope: Deactivated successfully. 
Mar 17 18:22:10.775957 env[1819]: time="2025-03-17T18:22:10.775276009Z" level=info msg="shim disconnected" id=772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f Mar 17 18:22:10.776333 env[1819]: time="2025-03-17T18:22:10.776274441Z" level=warning msg="cleaning up after shim disconnected" id=772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f namespace=k8s.io Mar 17 18:22:10.776483 env[1819]: time="2025-03-17T18:22:10.776453312Z" level=info msg="cleaning up dead shim" Mar 17 18:22:10.791393 env[1819]: time="2025-03-17T18:22:10.791243625Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4998 runtime=io.containerd.runc.v2\n" Mar 17 18:22:11.088143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-772f0e682125f2711f188464fa2fbfd6c17af1a3d53415aa9ff92dc06aa99d5f-rootfs.mount: Deactivated successfully. Mar 17 18:22:11.528840 env[1819]: time="2025-03-17T18:22:11.528746351Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:22:11.558145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855160277.mount: Deactivated successfully. Mar 17 18:22:11.577950 env[1819]: time="2025-03-17T18:22:11.577844945Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81c002525a7c10ab5f23eee0c07e40027b362b9be7a7d88c1c39d919ec995a40\"" Mar 17 18:22:11.579032 env[1819]: time="2025-03-17T18:22:11.578975029Z" level=info msg="StartContainer for \"81c002525a7c10ab5f23eee0c07e40027b362b9be7a7d88c1c39d919ec995a40\"" Mar 17 18:22:11.611988 systemd[1]: Started cri-containerd-81c002525a7c10ab5f23eee0c07e40027b362b9be7a7d88c1c39d919ec995a40.scope. 
Mar 17 18:22:11.675412 systemd[1]: cri-containerd-81c002525a7c10ab5f23eee0c07e40027b362b9be7a7d88c1c39d919ec995a40.scope: Deactivated successfully. Mar 17 18:22:11.676909 env[1819]: time="2025-03-17T18:22:11.675730436Z" level=info msg="StartContainer for \"81c002525a7c10ab5f23eee0c07e40027b362b9be7a7d88c1c39d919ec995a40\" returns successfully" Mar 17 18:22:11.728402 env[1819]: time="2025-03-17T18:22:11.728339724Z" level=info msg="shim disconnected" id=81c002525a7c10ab5f23eee0c07e40027b362b9be7a7d88c1c39d919ec995a40 Mar 17 18:22:11.728932 env[1819]: time="2025-03-17T18:22:11.728883166Z" level=warning msg="cleaning up after shim disconnected" id=81c002525a7c10ab5f23eee0c07e40027b362b9be7a7d88c1c39d919ec995a40 namespace=k8s.io Mar 17 18:22:11.729098 env[1819]: time="2025-03-17T18:22:11.729068637Z" level=info msg="cleaning up dead shim" Mar 17 18:22:11.743729 env[1819]: time="2025-03-17T18:22:11.743672495Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5053 runtime=io.containerd.runc.v2\n" Mar 17 18:22:12.130828 kubelet[2675]: E0317 18:22:12.130736 2675 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:22:12.532352 env[1819]: time="2025-03-17T18:22:12.532295437Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:22:12.587070 env[1819]: time="2025-03-17T18:22:12.586992992Z" level=info msg="CreateContainer within sandbox \"aa425e305ac7c8c7cf1d76851005e712dea3bb6850c77b73e7b75270b186786f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9ce818949f02a751c3ee66008f872ce35fffc238e68ddfe1d365c7990362c45f\"" Mar 17 18:22:12.588636 env[1819]: time="2025-03-17T18:22:12.588586705Z" level=info 
msg="StartContainer for \"9ce818949f02a751c3ee66008f872ce35fffc238e68ddfe1d365c7990362c45f\"" Mar 17 18:22:12.629657 systemd[1]: Started cri-containerd-9ce818949f02a751c3ee66008f872ce35fffc238e68ddfe1d365c7990362c45f.scope. Mar 17 18:22:12.729912 env[1819]: time="2025-03-17T18:22:12.729830981Z" level=info msg="StartContainer for \"9ce818949f02a751c3ee66008f872ce35fffc238e68ddfe1d365c7990362c45f\" returns successfully" Mar 17 18:22:13.091245 systemd[1]: run-containerd-runc-k8s.io-9ce818949f02a751c3ee66008f872ce35fffc238e68ddfe1d365c7990362c45f-runc.1xXxeX.mount: Deactivated successfully. Mar 17 18:22:13.573291 kubelet[2675]: I0317 18:22:13.573194 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jdtx8" podStartSLOduration=6.573171027 podStartE2EDuration="6.573171027s" podCreationTimestamp="2025-03-17 18:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:22:13.567895944 +0000 UTC m=+166.975266284" watchObservedRunningTime="2025-03-17 18:22:13.573171027 +0000 UTC m=+166.980541355" Mar 17 18:22:13.651953 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Mar 17 18:22:17.926190 systemd-networkd[1534]: lxc_health: Link UP Mar 17 18:22:17.953075 systemd-networkd[1534]: lxc_health: Gained carrier Mar 17 18:22:17.953654 (udev-worker)[5620]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 18:22:17.954220 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:22:19.907949 systemd-networkd[1534]: lxc_health: Gained IPv6LL Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.442991 1805 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.443049 1805 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.443323 1805 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.444177 1805 omaha_request_params.cc:62] Current group set to lts Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.444342 1805 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.444356 1805 update_attempter.cc:643] Scheduling an action processor start. 
Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.444383 1805 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:22:23.444633 update_engine[1805]: I0317 18:22:23.444439 1805 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 18:22:23.447223 update_engine[1805]: I0317 18:22:23.447078 1805 omaha_request_action.cc:270] Posting an Omaha request to disabled Mar 17 18:22:23.447223 update_engine[1805]: I0317 18:22:23.447114 1805 omaha_request_action.cc:271] Request: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: Mar 17 18:22:23.447223 update_engine[1805]: I0317 18:22:23.447125 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:22:23.449875 locksmithd[1870]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 18:22:23.489442 update_engine[1805]: I0317 18:22:23.489398 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:22:23.490128 update_engine[1805]: I0317 18:22:23.490094 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:22:23.504000 update_engine[1805]: E0317 18:22:23.503955 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:22:23.504333 update_engine[1805]: I0317 18:22:23.504307 1805 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 18:22:24.639647 sshd[4761]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:24.645744 systemd-logind[1804]: Session 28 logged out. Waiting for processes to exit. Mar 17 18:22:24.647948 systemd[1]: session-28.scope: Deactivated successfully. 
Mar 17 18:22:24.649095 systemd[1]: sshd@27-172.31.23.140:22-139.178.89.65:41212.service: Deactivated successfully. Mar 17 18:22:24.652978 systemd-logind[1804]: Removed session 28. Mar 17 18:22:26.908978 env[1819]: time="2025-03-17T18:22:26.908622675Z" level=info msg="StopPodSandbox for \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\"" Mar 17 18:22:26.908978 env[1819]: time="2025-03-17T18:22:26.908791791Z" level=info msg="TearDown network for sandbox \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" successfully" Mar 17 18:22:26.908978 env[1819]: time="2025-03-17T18:22:26.908868038Z" level=info msg="StopPodSandbox for \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" returns successfully" Mar 17 18:22:26.915000 env[1819]: time="2025-03-17T18:22:26.913171448Z" level=info msg="RemovePodSandbox for \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\"" Mar 17 18:22:26.915000 env[1819]: time="2025-03-17T18:22:26.913256995Z" level=info msg="Forcibly stopping sandbox \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\"" Mar 17 18:22:26.915000 env[1819]: time="2025-03-17T18:22:26.913442526Z" level=info msg="TearDown network for sandbox \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" successfully" Mar 17 18:22:26.920818 env[1819]: time="2025-03-17T18:22:26.920694347Z" level=info msg="RemovePodSandbox \"839ed5631ef7b2509752c497c88f8006b6a2d28f4d8d400f31806daf98b98e18\" returns successfully" Mar 17 18:22:26.921929 env[1819]: time="2025-03-17T18:22:26.921858210Z" level=info msg="StopPodSandbox for \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\"" Mar 17 18:22:26.922094 env[1819]: time="2025-03-17T18:22:26.922016513Z" level=info msg="TearDown network for sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" successfully" Mar 17 18:22:26.922094 env[1819]: time="2025-03-17T18:22:26.922080305Z" level=info msg="StopPodSandbox for 
\"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" returns successfully" Mar 17 18:22:26.922979 env[1819]: time="2025-03-17T18:22:26.922933153Z" level=info msg="RemovePodSandbox for \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\"" Mar 17 18:22:26.923279 env[1819]: time="2025-03-17T18:22:26.923190288Z" level=info msg="Forcibly stopping sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\"" Mar 17 18:22:26.923548 env[1819]: time="2025-03-17T18:22:26.923508982Z" level=info msg="TearDown network for sandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" successfully" Mar 17 18:22:26.931046 env[1819]: time="2025-03-17T18:22:26.930960434Z" level=info msg="RemovePodSandbox \"4255913ea9f7fb8762191be804b1a574e9a4f606ebfd0e3d7e763a6eed3da685\" returns successfully" Mar 17 18:22:33.445203 update_engine[1805]: I0317 18:22:33.445010 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:22:33.445738 update_engine[1805]: I0317 18:22:33.445384 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:22:33.445738 update_engine[1805]: I0317 18:22:33.445658 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:22:33.446322 update_engine[1805]: E0317 18:22:33.446273 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:22:33.446440 update_engine[1805]: I0317 18:22:33.446415 1805 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 18:22:38.234823 systemd[1]: cri-containerd-f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae.scope: Deactivated successfully. Mar 17 18:22:38.235408 systemd[1]: cri-containerd-f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae.scope: Consumed 5.217s CPU time. 
Mar 17 18:22:38.276286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae-rootfs.mount: Deactivated successfully. Mar 17 18:22:38.293901 env[1819]: time="2025-03-17T18:22:38.293835303Z" level=info msg="shim disconnected" id=f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae Mar 17 18:22:38.294689 env[1819]: time="2025-03-17T18:22:38.294652695Z" level=warning msg="cleaning up after shim disconnected" id=f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae namespace=k8s.io Mar 17 18:22:38.294850 env[1819]: time="2025-03-17T18:22:38.294821765Z" level=info msg="cleaning up dead shim" Mar 17 18:22:38.308151 env[1819]: time="2025-03-17T18:22:38.308095048Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5732 runtime=io.containerd.runc.v2\n" Mar 17 18:22:38.603724 kubelet[2675]: I0317 18:22:38.603226 2675 scope.go:117] "RemoveContainer" containerID="f55ffd714f460eb0049b7b07baf2c81fdd9a875f5eecf8b4230ece918b99f2ae" Mar 17 18:22:38.607992 env[1819]: time="2025-03-17T18:22:38.607932818Z" level=info msg="CreateContainer within sandbox \"a8cd4a578b25263dc8c783fd01e7e0582d06f713a3da84f7d0ca2a39cbe97935\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 17 18:22:38.631198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136585572.mount: Deactivated successfully. 
Mar 17 18:22:38.645917 env[1819]: time="2025-03-17T18:22:38.645829025Z" level=info msg="CreateContainer within sandbox \"a8cd4a578b25263dc8c783fd01e7e0582d06f713a3da84f7d0ca2a39cbe97935\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"31951219ae1940b2366fd4ddcce9f2c5fa262b25f48e4d56d43650ac7ac34abf\"" Mar 17 18:22:38.647080 env[1819]: time="2025-03-17T18:22:38.647035967Z" level=info msg="StartContainer for \"31951219ae1940b2366fd4ddcce9f2c5fa262b25f48e4d56d43650ac7ac34abf\"" Mar 17 18:22:38.677730 systemd[1]: Started cri-containerd-31951219ae1940b2366fd4ddcce9f2c5fa262b25f48e4d56d43650ac7ac34abf.scope. Mar 17 18:22:38.768257 env[1819]: time="2025-03-17T18:22:38.768119417Z" level=info msg="StartContainer for \"31951219ae1940b2366fd4ddcce9f2c5fa262b25f48e4d56d43650ac7ac34abf\" returns successfully" Mar 17 18:22:41.007456 kubelet[2675]: E0317 18:22:41.007383 2675 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-140?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 17 18:22:43.440827 update_engine[1805]: I0317 18:22:43.440755 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:22:43.441424 update_engine[1805]: I0317 18:22:43.441090 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:22:43.441424 update_engine[1805]: I0317 18:22:43.441337 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 18:22:43.441861 update_engine[1805]: E0317 18:22:43.441833 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:22:43.441997 update_engine[1805]: I0317 18:22:43.441960 1805 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 17 18:22:43.464030 systemd[1]: cri-containerd-86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62.scope: Deactivated successfully.
Mar 17 18:22:43.464592 systemd[1]: cri-containerd-86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62.scope: Consumed 2.495s CPU time.
Mar 17 18:22:43.500967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62-rootfs.mount: Deactivated successfully.
Mar 17 18:22:43.517250 env[1819]: time="2025-03-17T18:22:43.517187001Z" level=info msg="shim disconnected" id=86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62
Mar 17 18:22:43.518022 env[1819]: time="2025-03-17T18:22:43.517974403Z" level=warning msg="cleaning up after shim disconnected" id=86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62 namespace=k8s.io
Mar 17 18:22:43.518111 env[1819]: time="2025-03-17T18:22:43.518025728Z" level=info msg="cleaning up dead shim"
Mar 17 18:22:43.532783 env[1819]: time="2025-03-17T18:22:43.532576397Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5792 runtime=io.containerd.runc.v2\n"
Mar 17 18:22:43.620968 kubelet[2675]: I0317 18:22:43.620908 2675 scope.go:117] "RemoveContainer" containerID="86771b1f95b849da3e0eef0867f8b0c06bf23465905f0b1f6e9752be85758e62"
Mar 17 18:22:43.624853 env[1819]: time="2025-03-17T18:22:43.624734562Z" level=info msg="CreateContainer within sandbox \"de162a6376c745db837221a3b18593df0b0fe5dcae7e5de639f23eb8bf2d7744\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 18:22:43.649953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240483217.mount: Deactivated successfully.
Mar 17 18:22:43.663174 env[1819]: time="2025-03-17T18:22:43.663108506Z" level=info msg="CreateContainer within sandbox \"de162a6376c745db837221a3b18593df0b0fe5dcae7e5de639f23eb8bf2d7744\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0893c56dc81c2ceeda336290d62fdd4ac3dcee01dd00685fd1419286ad984bbd\""
Mar 17 18:22:43.664279 env[1819]: time="2025-03-17T18:22:43.664229788Z" level=info msg="StartContainer for \"0893c56dc81c2ceeda336290d62fdd4ac3dcee01dd00685fd1419286ad984bbd\""
Mar 17 18:22:43.707514 systemd[1]: Started cri-containerd-0893c56dc81c2ceeda336290d62fdd4ac3dcee01dd00685fd1419286ad984bbd.scope.
Mar 17 18:22:43.782395 env[1819]: time="2025-03-17T18:22:43.782265846Z" level=info msg="StartContainer for \"0893c56dc81c2ceeda336290d62fdd4ac3dcee01dd00685fd1419286ad984bbd\" returns successfully"
Mar 17 18:22:51.007930 kubelet[2675]: E0317 18:22:51.007873 2675 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-140?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 18:22:53.438174 update_engine[1805]: I0317 18:22:53.438102 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:22:53.438787 update_engine[1805]: I0317 18:22:53.438431 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:22:53.438787 update_engine[1805]: I0317 18:22:53.438681 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:22:53.440875 update_engine[1805]: E0317 18:22:53.439610 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439740 1805 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439754 1805 omaha_request_action.cc:621] Omaha request response:
Mar 17 18:22:53.440875 update_engine[1805]: E0317 18:22:53.439893 1805 omaha_request_action.cc:640] Omaha request network transfer failed.
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439918 1805 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439929 1805 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439937 1805 update_attempter.cc:306] Processing Done.
Mar 17 18:22:53.440875 update_engine[1805]: E0317 18:22:53.439956 1805 update_attempter.cc:619] Update failed.
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439965 1805 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439973 1805 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.439982 1805 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.440081 1805 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.440114 1805 omaha_request_action.cc:270] Posting an Omaha request to disabled
Mar 17 18:22:53.440875 update_engine[1805]: I0317 18:22:53.440124 1805 omaha_request_action.cc:271] Request:
Mar 17 18:22:53.440875 update_engine[1805]:
Mar 17 18:22:53.440875 update_engine[1805]:
Mar 17 18:22:53.440875 update_engine[1805]:
Mar 17 18:22:53.441953 update_engine[1805]:
Mar 17 18:22:53.441953 update_engine[1805]:
Mar 17 18:22:53.441953 update_engine[1805]:
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.440133 1805 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.440361 1805 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.440598 1805 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:22:53.441953 update_engine[1805]: E0317 18:22:53.441140 1805 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.441255 1805 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.441267 1805 omaha_request_action.cc:621] Omaha request response:
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.441277 1805 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.441287 1805 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.441295 1805 update_attempter.cc:306] Processing Done.
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.441305 1805 update_attempter.cc:310] Error event sent.
Mar 17 18:22:53.441953 update_engine[1805]: I0317 18:22:53.441319 1805 update_check_scheduler.cc:74] Next update check in 47m31s
Mar 17 18:22:53.442674 locksmithd[1870]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 17 18:22:53.442674 locksmithd[1870]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0