Jul 2 00:43:30.939574 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 2 00:43:30.939613 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 00:43:30.943303 kernel: efi: EFI v2.70 by EDK II
Jul 2 00:43:30.943328 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7173cf98
Jul 2 00:43:30.943343 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:43:30.943357 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 2 00:43:30.943373 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 2 00:43:30.943387 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:43:30.943401 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 2 00:43:30.943415 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:43:30.943434 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 2 00:43:30.943448 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 2 00:43:30.943470 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 2 00:43:30.943487 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:43:30.943504 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 2 00:43:30.943524 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 2 00:43:30.943539 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 2 00:43:30.943554 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 2 00:43:30.943569 kernel: printk: bootconsole [uart0] enabled
Jul 2 00:43:30.943584 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:43:30.943599 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:43:30.943614 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Jul 2 00:43:30.943628 kernel: Zone ranges:
Jul 2 00:43:30.943643 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 2 00:43:30.943657 kernel: DMA32 empty
Jul 2 00:43:30.943672 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 2 00:43:30.943690 kernel: Movable zone start for each node
Jul 2 00:43:30.943705 kernel: Early memory node ranges
Jul 2 00:43:30.943720 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 2 00:43:30.943734 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 2 00:43:30.943749 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 2 00:43:30.943763 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 2 00:43:30.943777 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 2 00:43:30.943792 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 2 00:43:30.943806 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 2 00:43:30.943821 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 2 00:43:30.943835 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:43:30.943850 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 2 00:43:30.943869 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:43:30.943884 kernel: psci: PSCIv1.0 detected in firmware.
Jul 2 00:43:30.943904 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:43:30.943921 kernel: psci: Trusted OS migration not required
Jul 2 00:43:30.943936 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:43:30.943956 kernel: ACPI: SRAT not present
Jul 2 00:43:30.943972 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 00:43:30.943987 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 00:43:30.944003 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 00:43:30.944019 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:43:30.944034 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:43:30.944050 kernel: CPU features: detected: Spectre-v2
Jul 2 00:43:30.944066 kernel: CPU features: detected: Spectre-v3a
Jul 2 00:43:30.944081 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:43:30.944097 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:43:30.944112 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:43:30.944131 kernel: CPU features: detected: ARM erratum 1742098
Jul 2 00:43:30.944147 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 2 00:43:30.944162 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 2 00:43:30.944178 kernel: Policy zone: Normal
Jul 2 00:43:30.944196 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:43:30.944258 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:43:30.944277 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:43:30.944294 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:43:30.944309 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:43:30.944325 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 2 00:43:30.944347 kernel: Memory: 3824588K/4030464K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 205876K reserved, 0K cma-reserved)
Jul 2 00:43:30.944363 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:43:30.944379 kernel: trace event string verifier disabled
Jul 2 00:43:30.944395 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:43:30.944411 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:43:30.944427 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:43:30.944443 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:43:30.944459 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:43:30.944475 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:43:30.944490 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:43:30.944505 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:43:30.944520 kernel: GICv3: 96 SPIs implemented
Jul 2 00:43:30.944539 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:43:30.944555 kernel: GICv3: Distributor has no Range Selector support
Jul 2 00:43:30.944570 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:43:30.944585 kernel: GICv3: 16 PPIs implemented
Jul 2 00:43:30.944600 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 2 00:43:30.944615 kernel: ACPI: SRAT not present
Jul 2 00:43:30.944630 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 2 00:43:30.944645 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:43:30.944661 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:43:30.944676 kernel: GICv3: using LPI property table @0x00000004000c0000
Jul 2 00:43:30.944692 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 2 00:43:30.944711 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Jul 2 00:43:30.944726 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 2 00:43:30.944742 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 2 00:43:30.944757 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 2 00:43:30.944773 kernel: Console: colour dummy device 80x25
Jul 2 00:43:30.944789 kernel: printk: console [tty1] enabled
Jul 2 00:43:30.944804 kernel: ACPI: Core revision 20210730
Jul 2 00:43:30.944820 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 2 00:43:30.944836 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:43:30.944852 kernel: LSM: Security Framework initializing
Jul 2 00:43:30.944871 kernel: SELinux: Initializing.
Jul 2 00:43:30.944887 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:43:30.944904 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:43:30.944920 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:43:30.944935 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 2 00:43:30.944951 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 2 00:43:30.944966 kernel: Remapping and enabling EFI services.
Jul 2 00:43:30.944982 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:43:30.944997 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:43:30.945013 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 2 00:43:30.945033 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Jul 2 00:43:30.945049 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 2 00:43:30.945064 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:43:30.945080 kernel: SMP: Total of 2 processors activated.
Jul 2 00:43:30.945096 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:43:30.945111 kernel: CPU features: detected: 32-bit EL1 Support
Jul 2 00:43:30.945127 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:43:30.945142 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:43:30.945158 kernel: alternatives: patching kernel code
Jul 2 00:43:30.945178 kernel: devtmpfs: initialized
Jul 2 00:43:30.945195 kernel: KASLR disabled due to lack of seed
Jul 2 00:43:30.945304 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:43:30.945332 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:43:30.945349 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:43:30.945365 kernel: SMBIOS 3.0.0 present.
Jul 2 00:43:30.945381 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 2 00:43:30.945399 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:43:30.945416 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:43:30.945432 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:43:30.945450 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:43:30.945471 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:43:30.945488 kernel: audit: type=2000 audit(0.249:1): state=initialized audit_enabled=0 res=1
Jul 2 00:43:30.945505 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:43:30.945522 kernel: cpuidle: using governor menu
Jul 2 00:43:30.945539 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:43:30.945559 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:43:30.945575 kernel: ACPI: bus type PCI registered
Jul 2 00:43:30.945592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:43:30.945608 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:43:30.945624 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:43:30.945641 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:43:30.945657 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:43:30.945673 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:43:30.945710 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:43:30.945732 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:43:30.945749 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:43:30.945765 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:43:30.945782 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:43:30.945798 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:43:30.945815 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 00:43:30.945831 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 00:43:30.945847 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 00:43:30.945863 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:43:30.945883 kernel: ACPI: Interpreter enabled
Jul 2 00:43:30.945900 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:43:30.945916 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:43:30.945932 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 2 00:43:30.946284 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:43:30.946501 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:43:30.946702 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:43:30.946898 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 2 00:43:30.947100 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 2 00:43:30.947123 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 2 00:43:30.947140 kernel: acpiphp: Slot [1] registered
Jul 2 00:43:30.947157 kernel: acpiphp: Slot [2] registered
Jul 2 00:43:30.947173 kernel: acpiphp: Slot [3] registered
Jul 2 00:43:30.947189 kernel: acpiphp: Slot [4] registered
Jul 2 00:43:30.947205 kernel: acpiphp: Slot [5] registered
Jul 2 00:43:30.947249 kernel: acpiphp: Slot [6] registered
Jul 2 00:43:30.947266 kernel: acpiphp: Slot [7] registered
Jul 2 00:43:30.947288 kernel: acpiphp: Slot [8] registered
Jul 2 00:43:30.947304 kernel: acpiphp: Slot [9] registered
Jul 2 00:43:30.947320 kernel: acpiphp: Slot [10] registered
Jul 2 00:43:30.947336 kernel: acpiphp: Slot [11] registered
Jul 2 00:43:30.947353 kernel: acpiphp: Slot [12] registered
Jul 2 00:43:30.947368 kernel: acpiphp: Slot [13] registered
Jul 2 00:43:30.947385 kernel: acpiphp: Slot [14] registered
Jul 2 00:43:30.947401 kernel: acpiphp: Slot [15] registered
Jul 2 00:43:30.947417 kernel: acpiphp: Slot [16] registered
Jul 2 00:43:30.947437 kernel: acpiphp: Slot [17] registered
Jul 2 00:43:30.947454 kernel: acpiphp: Slot [18] registered
Jul 2 00:43:30.947470 kernel: acpiphp: Slot [19] registered
Jul 2 00:43:30.947486 kernel: acpiphp: Slot [20] registered
Jul 2 00:43:30.947502 kernel: acpiphp: Slot [21] registered
Jul 2 00:43:30.947518 kernel: acpiphp: Slot [22] registered
Jul 2 00:43:30.947534 kernel: acpiphp: Slot [23] registered
Jul 2 00:43:30.947550 kernel: acpiphp: Slot [24] registered
Jul 2 00:43:30.947565 kernel: acpiphp: Slot [25] registered
Jul 2 00:43:30.947581 kernel: acpiphp: Slot [26] registered
Jul 2 00:43:30.947601 kernel: acpiphp: Slot [27] registered
Jul 2 00:43:30.947617 kernel: acpiphp: Slot [28] registered
Jul 2 00:43:30.947633 kernel: acpiphp: Slot [29] registered
Jul 2 00:43:30.947649 kernel: acpiphp: Slot [30] registered
Jul 2 00:43:30.947665 kernel: acpiphp: Slot [31] registered
Jul 2 00:43:30.947681 kernel: PCI host bridge to bus 0000:00
Jul 2 00:43:30.947892 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 2 00:43:30.948072 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:43:30.948277 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:43:30.948459 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 2 00:43:30.948676 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 2 00:43:30.948889 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 2 00:43:30.949090 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 2 00:43:30.954408 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:43:30.954651 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 2 00:43:30.954852 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:43:30.955067 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:43:30.955292 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 2 00:43:30.955504 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 2 00:43:30.955709 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 2 00:43:30.955914 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:43:30.956116 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 2 00:43:30.956346 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 2 00:43:30.956552 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 2 00:43:30.956754 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 2 00:43:30.956968 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 2 00:43:30.957159 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 2 00:43:30.965400 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:43:30.965668 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:43:30.965710 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:43:30.965729 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:43:30.965746 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:43:30.965763 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:43:30.965780 kernel: iommu: Default domain type: Translated
Jul 2 00:43:30.965797 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:43:30.965814 kernel: vgaarb: loaded
Jul 2 00:43:30.965830 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:43:30.965855 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:43:30.965872 kernel: PTP clock support registered
Jul 2 00:43:30.965888 kernel: Registered efivars operations
Jul 2 00:43:30.965905 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:43:30.965921 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:43:30.965938 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:43:30.965955 kernel: pnp: PnP ACPI init
Jul 2 00:43:30.966246 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 2 00:43:30.966287 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:43:30.966305 kernel: NET: Registered PF_INET protocol family
Jul 2 00:43:30.966322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:43:30.966339 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:43:30.966356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:43:30.966373 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:43:30.966390 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 00:43:30.966406 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:43:30.966423 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:43:30.966444 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:43:30.966461 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:43:30.966477 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:43:30.966494 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 2 00:43:30.966510 kernel: kvm [1]: HYP mode not available
Jul 2 00:43:30.966527 kernel: Initialise system trusted keyrings
Jul 2 00:43:30.966543 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:43:30.966560 kernel: Key type asymmetric registered
Jul 2 00:43:30.966576 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:43:30.966597 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 00:43:30.966614 kernel: io scheduler mq-deadline registered
Jul 2 00:43:30.966630 kernel: io scheduler kyber registered
Jul 2 00:43:30.966646 kernel: io scheduler bfq registered
Jul 2 00:43:30.966873 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 2 00:43:30.966902 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:43:30.966920 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:43:30.966937 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 2 00:43:30.966962 kernel: ACPI: button: Sleep Button [SLPB]
Jul 2 00:43:30.966979 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:43:30.966997 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 2 00:43:30.967241 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 2 00:43:30.967272 kernel: printk: console [ttyS0] disabled
Jul 2 00:43:30.967289 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 2 00:43:30.967306 kernel: printk: console [ttyS0] enabled
Jul 2 00:43:30.967322 kernel: printk: bootconsole [uart0] disabled
Jul 2 00:43:30.967338 kernel: thunder_xcv, ver 1.0
Jul 2 00:43:30.967354 kernel: thunder_bgx, ver 1.0
Jul 2 00:43:30.967378 kernel: nicpf, ver 1.0
Jul 2 00:43:30.967395 kernel: nicvf, ver 1.0
Jul 2 00:43:30.967611 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:43:30.967801 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:43:30 UTC (1719881010)
Jul 2 00:43:30.967824 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:43:30.967841 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:43:30.967857 kernel: Segment Routing with IPv6
Jul 2 00:43:30.967874 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:43:30.967895 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:43:30.967912 kernel: Key type dns_resolver registered
Jul 2 00:43:30.967928 kernel: registered taskstats version 1
Jul 2 00:43:30.967944 kernel: Loading compiled-in X.509 certificates
Jul 2 00:43:30.967960 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 00:43:30.967977 kernel: Key type .fscrypt registered
Jul 2 00:43:30.967993 kernel: Key type fscrypt-provisioning registered
Jul 2 00:43:30.968009 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:43:30.968025 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:43:30.968046 kernel: ima: No architecture policies found
Jul 2 00:43:30.968062 kernel: clk: Disabling unused clocks
Jul 2 00:43:30.968079 kernel: Freeing unused kernel memory: 36352K
Jul 2 00:43:30.968095 kernel: Run /init as init process
Jul 2 00:43:30.968111 kernel: with arguments:
Jul 2 00:43:30.968127 kernel: /init
Jul 2 00:43:30.968143 kernel: with environment:
Jul 2 00:43:30.968159 kernel: HOME=/
Jul 2 00:43:30.968175 kernel: TERM=linux
Jul 2 00:43:30.968195 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:43:30.968241 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 00:43:30.968265 systemd[1]: Detected virtualization amazon.
Jul 2 00:43:30.968283 systemd[1]: Detected architecture arm64.
Jul 2 00:43:30.968301 systemd[1]: Running in initrd.
Jul 2 00:43:30.968318 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:43:30.968336 systemd[1]: Hostname set to .
Jul 2 00:43:30.968360 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:43:30.968378 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:43:30.968396 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 00:43:30.968413 systemd[1]: Reached target cryptsetup.target.
Jul 2 00:43:30.968431 systemd[1]: Reached target paths.target.
Jul 2 00:43:30.968448 systemd[1]: Reached target slices.target.
Jul 2 00:43:30.968466 systemd[1]: Reached target swap.target.
Jul 2 00:43:30.968483 systemd[1]: Reached target timers.target.
Jul 2 00:43:30.968505 systemd[1]: Listening on iscsid.socket.
Jul 2 00:43:30.968523 systemd[1]: Listening on iscsiuio.socket.
Jul 2 00:43:30.968541 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 00:43:30.968558 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 00:43:30.968577 systemd[1]: Listening on systemd-journald.socket.
Jul 2 00:43:30.968594 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 00:43:30.968612 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 00:43:30.968630 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 00:43:30.968651 systemd[1]: Reached target sockets.target.
Jul 2 00:43:30.968669 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 00:43:30.968687 systemd[1]: Finished network-cleanup.service.
Jul 2 00:43:30.968704 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:43:30.968722 systemd[1]: Starting systemd-journald.service...
Jul 2 00:43:30.968739 systemd[1]: Starting systemd-modules-load.service...
Jul 2 00:43:30.968757 systemd[1]: Starting systemd-resolved.service...
Jul 2 00:43:30.968775 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 00:43:30.968793 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 00:43:30.968815 kernel: audit: type=1130 audit(1719881010.957:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.968833 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:43:30.968854 systemd-journald[309]: Journal started
Jul 2 00:43:30.968940 systemd-journald[309]: Runtime Journal (/run/log/journal/ec276d494ad9f9e2909c4fa0d0bac2fa) is 8.0M, max 75.4M, 67.4M free.
Jul 2 00:43:30.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.939914 systemd-modules-load[310]: Inserted module 'overlay'
Jul 2 00:43:30.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.988230 kernel: audit: type=1130 audit(1719881010.980:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.988269 systemd[1]: Started systemd-journald.service.
Jul 2 00:43:31.003244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:43:31.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.005792 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 00:43:31.018338 kernel: audit: type=1130 audit(1719881011.003:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.027880 systemd-modules-load[310]: Inserted module 'br_netfilter'
Jul 2 00:43:31.031525 kernel: audit: type=1130 audit(1719881011.017:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.031562 kernel: Bridge firewalling registered
Jul 2 00:43:31.042597 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 00:43:31.050292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 00:43:31.064264 kernel: SCSI subsystem initialized
Jul 2 00:43:31.080162 systemd-resolved[311]: Positive Trust Anchors:
Jul 2 00:43:31.080192 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:43:31.089035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 00:43:31.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.096601 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 00:43:31.104938 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 00:43:31.128389 kernel: audit: type=1130 audit(1719881011.095:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.128438 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:43:31.128468 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:43:31.128492 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 00:43:31.128515 kernel: audit: type=1130 audit(1719881011.112:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.116435 systemd[1]: Starting dracut-cmdline.service...
Jul 2 00:43:31.143759 systemd-modules-load[310]: Inserted module 'dm_multipath'
Jul 2 00:43:31.147618 systemd[1]: Finished systemd-modules-load.service.
Jul 2 00:43:31.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.157249 kernel: audit: type=1130 audit(1719881011.149:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.157927 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:43:31.160403 dracut-cmdline[327]: dracut-dracut-053
Jul 2 00:43:31.173432 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:43:31.198882 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:43:31.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.211246 kernel: audit: type=1130 audit(1719881011.201:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.312252 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:43:31.333256 kernel: iscsi: registered transport (tcp)
Jul 2 00:43:31.360107 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:43:31.360191 kernel: QLogic iSCSI HBA Driver
Jul 2 00:43:31.543934 systemd-resolved[311]: Defaulting to hostname 'linux'.
Jul 2 00:43:31.546394 kernel: random: crng init done
Jul 2 00:43:31.548082 systemd[1]: Started systemd-resolved.service.
Jul 2 00:43:31.559373 kernel: audit: type=1130 audit(1719881011.548:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.550035 systemd[1]: Reached target nss-lookup.target.
Jul 2 00:43:31.574884 systemd[1]: Finished dracut-cmdline.service.
Jul 2 00:43:31.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:31.579173 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 00:43:31.644273 kernel: raid6: neonx8 gen() 6408 MB/s
Jul 2 00:43:31.662257 kernel: raid6: neonx8 xor() 4716 MB/s
Jul 2 00:43:31.680242 kernel: raid6: neonx4 gen() 6566 MB/s
Jul 2 00:43:31.698244 kernel: raid6: neonx4 xor() 4894 MB/s
Jul 2 00:43:31.716244 kernel: raid6: neonx2 gen() 5784 MB/s
Jul 2 00:43:31.734243 kernel: raid6: neonx2 xor() 4507 MB/s
Jul 2 00:43:31.752243 kernel: raid6: neonx1 gen() 4482 MB/s
Jul 2 00:43:31.770252 kernel: raid6: neonx1 xor() 3660 MB/s
Jul 2 00:43:31.788246 kernel: raid6: int64x8 gen() 3423 MB/s
Jul 2 00:43:31.806254 kernel: raid6: int64x8 xor() 2066 MB/s
Jul 2 00:43:31.824255 kernel: raid6: int64x4 gen() 3820 MB/s
Jul 2 00:43:31.842259 kernel: raid6: int64x4 xor() 2178 MB/s
Jul 2 00:43:31.860243 kernel: raid6: int64x2 gen() 3597 MB/s
Jul 2 00:43:31.878249 kernel: raid6: int64x2 xor() 1935 MB/s
Jul 2 00:43:31.896248 kernel: raid6: int64x1 gen() 2762 MB/s
Jul 2 00:43:31.915194 kernel: raid6: int64x1 xor() 1440 MB/s
Jul 2 00:43:31.915273 kernel: raid6: using algorithm neonx4 gen() 6566 MB/s
Jul 2 00:43:31.915297 kernel: raid6: .... xor() 4894 MB/s, rmw enabled
Jul 2 00:43:31.916725 kernel: raid6: using neon recovery algorithm
Jul 2 00:43:31.935246 kernel: xor: measuring software checksum speed
Jul 2 00:43:31.937246 kernel: 8regs : 9293 MB/sec
Jul 2 00:43:31.939240 kernel: 32regs : 11085 MB/sec
Jul 2 00:43:31.943140 kernel: arm64_neon : 9614 MB/sec
Jul 2 00:43:31.943174 kernel: xor: using function: 32regs (11085 MB/sec)
Jul 2 00:43:32.034259 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 00:43:32.051645 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 00:43:32.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:32.053000 audit: BPF prog-id=7 op=LOAD
Jul 2 00:43:32.053000 audit: BPF prog-id=8 op=LOAD
Jul 2 00:43:32.056014 systemd[1]: Starting systemd-udevd.service...
Jul 2 00:43:32.083007 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Jul 2 00:43:32.093352 systemd[1]: Started systemd-udevd.service.
Jul 2 00:43:32.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:32.098705 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 00:43:32.133843 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Jul 2 00:43:32.195377 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 00:43:32.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:32.199631 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 00:43:32.308811 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 00:43:32.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:32.440296 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:43:32.440381 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 2 00:43:32.448053 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 2 00:43:32.448420 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 2 00:43:32.454929 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 2 00:43:32.454992 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 2 00:43:32.455316 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:50:11:de:0d:cb
Jul 2 00:43:32.467250 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 2 00:43:32.472163 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:43:32.472201 kernel: GPT:9289727 != 16777215
Jul 2 00:43:32.472250 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:43:32.474063 kernel: GPT:9289727 != 16777215
Jul 2 00:43:32.475191 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:43:32.478108 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:43:32.481156 (udev-worker)[571]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:43:32.548249 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (567)
Jul 2 00:43:32.571035 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 00:43:32.645798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 00:43:32.671554 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 00:43:32.676132 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 00:43:32.698475 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 00:43:32.703457 systemd[1]: Starting disk-uuid.service...
Jul 2 00:43:32.731368 disk-uuid[675]: Primary Header is updated.
Jul 2 00:43:32.731368 disk-uuid[675]: Secondary Entries is updated.
Jul 2 00:43:32.731368 disk-uuid[675]: Secondary Header is updated.
Jul 2 00:43:32.743252 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:43:32.753241 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:43:32.763252 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:43:33.758260 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:43:33.758815 disk-uuid[676]: The operation has completed successfully.
Jul 2 00:43:33.936805 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:43:33.937511 systemd[1]: Finished disk-uuid.service.
Jul 2 00:43:33.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:33.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:33.962419 systemd[1]: Starting verity-setup.service...
Jul 2 00:43:33.998260 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:43:34.095199 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 00:43:34.101332 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 00:43:34.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:34.108095 systemd[1]: Finished verity-setup.service.
Jul 2 00:43:34.192249 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 00:43:34.193504 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 00:43:34.196082 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 00:43:34.199724 systemd[1]: Starting ignition-setup.service...
Jul 2 00:43:34.210616 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 00:43:34.227705 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:43:34.227777 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:43:34.229621 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Jul 2 00:43:34.237266 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:43:34.255018 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:43:34.288479 systemd[1]: Finished ignition-setup.service.
Jul 2 00:43:34.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:34.292687 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 00:43:34.357977 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 00:43:34.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:34.361000 audit: BPF prog-id=9 op=LOAD
Jul 2 00:43:34.363317 systemd[1]: Starting systemd-networkd.service...
Jul 2 00:43:34.410336 systemd-networkd[1188]: lo: Link UP
Jul 2 00:43:34.410359 systemd-networkd[1188]: lo: Gained carrier
Jul 2 00:43:34.413793 systemd-networkd[1188]: Enumeration completed
Jul 2 00:43:34.414410 systemd-networkd[1188]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:43:34.418631 systemd[1]: Started systemd-networkd.service.
Jul 2 00:43:34.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:34.420947 systemd[1]: Reached target network.target.
Jul 2 00:43:34.425945 systemd[1]: Starting iscsiuio.service...
Jul 2 00:43:34.431632 systemd-networkd[1188]: eth0: Link UP
Jul 2 00:43:34.432063 systemd-networkd[1188]: eth0: Gained carrier
Jul 2 00:43:34.439994 systemd[1]: Started iscsiuio.service.
Jul 2 00:43:34.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:34.445897 systemd[1]: Starting iscsid.service...
Jul 2 00:43:34.454428 systemd-networkd[1188]: eth0: DHCPv4 address 172.31.27.155/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:43:34.457796 iscsid[1193]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:43:34.457796 iscsid[1193]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Jul 2 00:43:34.457796 iscsid[1193]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 00:43:34.457796 iscsid[1193]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 00:43:34.473091 iscsid[1193]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:43:34.473091 iscsid[1193]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 00:43:34.480327 systemd[1]: Started iscsid.service.
Jul 2 00:43:34.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:34.494816 systemd[1]: Starting dracut-initqueue.service...
Jul 2 00:43:34.518693 systemd[1]: Finished dracut-initqueue.service.
Jul 2 00:43:34.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:34.521754 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 00:43:34.524543 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 00:43:34.527384 systemd[1]: Reached target remote-fs.target.
Jul 2 00:43:34.531694 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 00:43:34.551399 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 00:43:34.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.346160 ignition[1134]: Ignition 2.14.0
Jul 2 00:43:35.347706 ignition[1134]: Stage: fetch-offline
Jul 2 00:43:35.349401 ignition[1134]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:43:35.351560 ignition[1134]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:43:35.376932 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:43:35.379237 ignition[1134]: Ignition finished successfully
Jul 2 00:43:35.382527 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 00:43:35.397626 kernel: kauditd_printk_skb: 18 callbacks suppressed
Jul 2 00:43:35.397682 kernel: audit: type=1130 audit(1719881015.385:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.387769 systemd[1]: Starting ignition-fetch.service...
Jul 2 00:43:35.411693 ignition[1212]: Ignition 2.14.0
Jul 2 00:43:35.411722 ignition[1212]: Stage: fetch
Jul 2 00:43:35.412028 ignition[1212]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:43:35.412087 ignition[1212]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:43:35.425589 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:43:35.427633 ignition[1212]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:43:35.435260 ignition[1212]: INFO : PUT result: OK
Jul 2 00:43:35.438451 ignition[1212]: DEBUG : parsed url from cmdline: ""
Jul 2 00:43:35.438451 ignition[1212]: INFO : no config URL provided
Jul 2 00:43:35.438451 ignition[1212]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:43:35.443942 ignition[1212]: INFO : no config at "/usr/lib/ignition/user.ign"
Jul 2 00:43:35.443942 ignition[1212]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:43:35.443942 ignition[1212]: INFO : PUT result: OK
Jul 2 00:43:35.443942 ignition[1212]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 00:43:35.453868 ignition[1212]: INFO : GET result: OK
Jul 2 00:43:35.453868 ignition[1212]: DEBUG : parsing config with SHA512: 75e5ce15889c994ba77a13990885e5e8a830f42880ad1e28daf3dd2dbf9de991064eb8b9f5ddb850e408d9967a8c6d9a7812d59543603e4f9699a1b7388c251f
Jul 2 00:43:35.461919 unknown[1212]: fetched base config from "system"
Jul 2 00:43:35.469345 ignition[1212]: fetch: fetch complete
Jul 2 00:43:35.461937 unknown[1212]: fetched base config from "system"
Jul 2 00:43:35.469361 ignition[1212]: fetch: fetch passed
Jul 2 00:43:35.461952 unknown[1212]: fetched user config from "aws"
Jul 2 00:43:35.469479 ignition[1212]: Ignition finished successfully
Jul 2 00:43:35.479381 systemd[1]: Finished ignition-fetch.service.
Jul 2 00:43:35.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.489713 systemd[1]: Starting ignition-kargs.service...
Jul 2 00:43:35.493842 kernel: audit: type=1130 audit(1719881015.478:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.507710 ignition[1218]: Ignition 2.14.0
Jul 2 00:43:35.509334 ignition[1218]: Stage: kargs
Jul 2 00:43:35.510768 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:43:35.512855 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:43:35.523826 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:43:35.525946 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:43:35.528229 ignition[1218]: INFO : PUT result: OK
Jul 2 00:43:35.533747 ignition[1218]: kargs: kargs passed
Jul 2 00:43:35.533863 ignition[1218]: Ignition finished successfully
Jul 2 00:43:35.540869 systemd[1]: Finished ignition-kargs.service.
Jul 2 00:43:35.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.550281 kernel: audit: type=1130 audit(1719881015.542:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.545917 systemd[1]: Starting ignition-disks.service...
Jul 2 00:43:35.557501 systemd-networkd[1188]: eth0: Gained IPv6LL
Jul 2 00:43:35.562612 ignition[1224]: Ignition 2.14.0
Jul 2 00:43:35.562638 ignition[1224]: Stage: disks
Jul 2 00:43:35.562944 ignition[1224]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:43:35.563008 ignition[1224]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:43:35.577858 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:43:35.580279 ignition[1224]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:43:35.582924 ignition[1224]: INFO : PUT result: OK
Jul 2 00:43:35.589355 ignition[1224]: disks: disks passed
Jul 2 00:43:35.589644 ignition[1224]: Ignition finished successfully
Jul 2 00:43:35.593544 systemd[1]: Finished ignition-disks.service.
Jul 2 00:43:35.611445 kernel: audit: type=1130 audit(1719881015.593:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.594668 systemd[1]: Reached target initrd-root-device.target.
Jul 2 00:43:35.594881 systemd[1]: Reached target local-fs-pre.target.
Jul 2 00:43:35.595153 systemd[1]: Reached target local-fs.target.
Jul 2 00:43:35.595762 systemd[1]: Reached target sysinit.target.
Jul 2 00:43:35.596045 systemd[1]: Reached target basic.target.
Jul 2 00:43:35.604918 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 00:43:35.651479 systemd-fsck[1232]: ROOT: clean, 614/553520 files, 56019/553472 blocks
Jul 2 00:43:35.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.658939 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 00:43:35.669694 systemd[1]: Mounting sysroot.mount...
Jul 2 00:43:35.674063 kernel: audit: type=1130 audit(1719881015.659:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:35.692261 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 00:43:35.693416 systemd[1]: Mounted sysroot.mount.
Jul 2 00:43:35.694465 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 00:43:35.708497 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 00:43:35.715853 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 00:43:35.715946 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:43:35.718348 systemd[1]: Reached target ignition-diskful.target.
Jul 2 00:43:35.726107 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 00:43:35.745908 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 00:43:35.753322 systemd[1]: Starting initrd-setup-root.service...
Jul 2 00:43:35.766935 initrd-setup-root[1254]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:43:35.769234 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1249)
Jul 2 00:43:35.774836 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:43:35.774900 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:43:35.776922 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Jul 2 00:43:35.783249 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:43:35.788230 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 00:43:35.792927 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:43:35.800585 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:43:35.810036 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:43:36.076864 systemd[1]: Finished initrd-setup-root.service.
Jul 2 00:43:36.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:36.081020 systemd[1]: Starting ignition-mount.service...
Jul 2 00:43:36.088835 kernel: audit: type=1130 audit(1719881016.078:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:36.090021 systemd[1]: Starting sysroot-boot.service...
Jul 2 00:43:36.101025 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Jul 2 00:43:36.101200 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Jul 2 00:43:36.135572 ignition[1316]: INFO : Ignition 2.14.0
Jul 2 00:43:36.135572 ignition[1316]: INFO : Stage: mount
Jul 2 00:43:36.138835 ignition[1316]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:43:36.138835 ignition[1316]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:43:36.153395 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:43:36.153395 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:43:36.164759 kernel: audit: type=1130 audit(1719881016.153:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:36.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:36.147871 systemd[1]: Finished sysroot-boot.service.
Jul 2 00:43:36.166522 ignition[1316]: INFO : PUT result: OK
Jul 2 00:43:36.171337 ignition[1316]: INFO : mount: mount passed
Jul 2 00:43:36.172906 ignition[1316]: INFO : Ignition finished successfully
Jul 2 00:43:36.176109 systemd[1]: Finished ignition-mount.service.
Jul 2 00:43:36.188403 kernel: audit: type=1130 audit(1719881016.176:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:36.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:36.187372 systemd[1]: Starting ignition-files.service...
Jul 2 00:43:36.197272 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 00:43:36.214352 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1325)
Jul 2 00:43:36.219060 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:43:36.219095 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:43:36.219128 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Jul 2 00:43:36.227244 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:43:36.232156 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 00:43:36.250135 ignition[1344]: INFO : Ignition 2.14.0
Jul 2 00:43:36.250135 ignition[1344]: INFO : Stage: files
Jul 2 00:43:36.253610 ignition[1344]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:43:36.253610 ignition[1344]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:43:36.266094 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:43:36.268551 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:43:36.271618 ignition[1344]: INFO : PUT result: OK
Jul 2 00:43:36.276691 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:43:36.280046 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:43:36.280046 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:43:36.319041 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:43:36.321675 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:43:36.324977 unknown[1344]: wrote ssh authorized keys file for user: core
Jul 2 00:43:36.327038 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:43:36.330411 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:43:36.333788 ignition[1344]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 00:43:36.425845 ignition[1344]: INFO : GET result: OK
Jul 2 00:43:36.555954 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:43:36.559824 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:43:36.559824 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:43:36.559824 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Jul 2 00:43:36.559824 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:43:36.579798 ignition[1344]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2686426068"
Jul 2 00:43:36.588765 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1349)
Jul 2 00:43:36.588805 ignition[1344]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2686426068": device or resource busy
Jul 2 00:43:36.588805 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2686426068", trying btrfs: device or resource busy
Jul 2 00:43:36.588805 ignition[1344]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2686426068"
Jul 2 00:43:36.597990 ignition[1344]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2686426068"
Jul 2 00:43:36.601608 ignition[1344]: INFO : op(3): [started] unmounting "/mnt/oem2686426068"
Jul 2 00:43:36.603735 ignition[1344]: INFO : op(3): [finished] unmounting "/mnt/oem2686426068"
Jul 2 00:43:36.603735 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Jul 2 00:43:36.603735 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:43:36.603735 ignition[1344]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 00:43:36.963653 ignition[1344]: INFO : GET result: OK
Jul 2 00:43:37.131354 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:37.136525 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 00:43:37.136525 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:43:37.182262 ignition[1344]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem865689769"
Jul 2 00:43:37.182262 ignition[1344]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem865689769": device or resource busy
Jul 2 00:43:37.182262 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem865689769", trying btrfs: device or resource busy
Jul 2 00:43:37.182262 ignition[1344]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem865689769"
Jul 2 00:43:37.192992 ignition[1344]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem865689769"
Jul 2 00:43:37.192992 ignition[1344]: INFO : op(6): [started] unmounting "/mnt/oem865689769"
Jul 2 00:43:37.192992 ignition[1344]: INFO : op(6): [finished] unmounting "/mnt/oem865689769"
Jul 2 00:43:37.192992 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 00:43:37.192992 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 2 00:43:37.192992 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:43:37.221281 ignition[1344]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3622483672"
Jul 2 00:43:37.221281 ignition[1344]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3622483672": device or resource busy
Jul 2 00:43:37.221281 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3622483672", trying btrfs: device or resource busy
Jul 2 00:43:37.221281 ignition[1344]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3622483672"
Jul 2 00:43:37.221281 ignition[1344]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3622483672"
Jul 2 00:43:37.221281 ignition[1344]: INFO : op(9): [started] unmounting "/mnt/oem3622483672"
Jul 2 00:43:37.221281 ignition[1344]: INFO : op(9): [finished] unmounting "/mnt/oem3622483672"
Jul 2 00:43:37.221281 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 2 00:43:37.221281 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:37.221281 ignition[1344]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jul 2 00:43:37.604811 ignition[1344]: INFO : GET result: OK
Jul 2 00:43:38.015889 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 00:43:38.019828 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 2 00:43:38.019828 ignition[1344]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:43:38.040249 ignition[1344]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem935006483"
Jul 2 00:43:38.043172 ignition[1344]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem935006483": device or resource busy
Jul 2 00:43:38.043172 ignition[1344]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem935006483", trying btrfs: device or resource busy
Jul 2 00:43:38.043172 ignition[1344]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem935006483"
Jul 2 00:43:38.051839 ignition[1344]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem935006483"
Jul 2 00:43:38.051839 ignition[1344]: INFO : op(c): [started] unmounting "/mnt/oem935006483"
Jul 2 00:43:38.056156 ignition[1344]: INFO : op(c): [finished] unmounting "/mnt/oem935006483"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(13): [started] processing unit "nvidia.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Jul 2 00:43:38.056156 ignition[1344]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:43:38.109878 ignition[1344]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:43:38.120523 ignition[1344]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:43:38.123724 ignition[1344]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:43:38.123724 ignition[1344]: INFO : files: files passed Jul 2 00:43:38.123724 ignition[1344]: INFO : Ignition finished 
successfully Jul 2 00:43:38.129965 systemd[1]: Finished ignition-files.service. Jul 2 00:43:38.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.140236 kernel: audit: type=1130 audit(1719881018.132:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.144375 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 00:43:38.146504 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 00:43:38.150183 systemd[1]: Starting ignition-quench.service... Jul 2 00:43:38.163322 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:43:38.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.163523 systemd[1]: Finished ignition-quench.service. Jul 2 00:43:38.175398 kernel: audit: type=1130 audit(1719881018.164:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.176575 initrd-setup-root-after-ignition[1369]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:43:38.180420 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Jul 2 00:43:38.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.183957 systemd[1]: Reached target ignition-complete.target. Jul 2 00:43:38.187069 systemd[1]: Starting initrd-parse-etc.service... Jul 2 00:43:38.215725 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:43:38.216085 systemd[1]: Finished initrd-parse-etc.service. Jul 2 00:43:38.220688 systemd[1]: Reached target initrd-fs.target. Jul 2 00:43:38.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.225175 systemd[1]: Reached target initrd.target. Jul 2 00:43:38.228020 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 00:43:38.231760 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 00:43:38.256194 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 00:43:38.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.260463 systemd[1]: Starting initrd-cleanup.service... Jul 2 00:43:38.280303 systemd[1]: Stopped target nss-lookup.target. Jul 2 00:43:38.283484 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 00:43:38.286762 systemd[1]: Stopped target timers.target. Jul 2 00:43:38.289541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:43:38.291468 systemd[1]: Stopped dracut-pre-pivot.service. 
Jul 2 00:43:38.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.294794 systemd[1]: Stopped target initrd.target. Jul 2 00:43:38.297633 systemd[1]: Stopped target basic.target. Jul 2 00:43:38.300411 systemd[1]: Stopped target ignition-complete.target. Jul 2 00:43:38.303677 systemd[1]: Stopped target ignition-diskful.target. Jul 2 00:43:38.306805 systemd[1]: Stopped target initrd-root-device.target. Jul 2 00:43:38.310050 systemd[1]: Stopped target remote-fs.target. Jul 2 00:43:38.312875 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 00:43:38.315918 systemd[1]: Stopped target sysinit.target. Jul 2 00:43:38.319794 systemd[1]: Stopped target local-fs.target. Jul 2 00:43:38.322637 systemd[1]: Stopped target local-fs-pre.target. Jul 2 00:43:38.325563 systemd[1]: Stopped target swap.target. Jul 2 00:43:38.328095 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:43:38.330005 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 00:43:38.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.333058 systemd[1]: Stopped target cryptsetup.target. Jul 2 00:43:38.335934 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:43:38.337812 systemd[1]: Stopped dracut-initqueue.service. Jul 2 00:43:38.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.340822 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:43:38.343103 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Jul 2 00:43:38.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.346890 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:43:38.348753 systemd[1]: Stopped ignition-files.service. Jul 2 00:43:38.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.353186 systemd[1]: Stopping ignition-mount.service... Jul 2 00:43:38.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.356421 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:43:38.356697 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 00:43:38.359930 systemd[1]: Stopping sysroot-boot.service... Jul 2 00:43:38.363655 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:43:38.363976 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 00:43:38.366686 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:43:38.366900 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 00:43:38.379832 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 2 00:43:38.380342 systemd[1]: Finished initrd-cleanup.service. Jul 2 00:43:38.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.402846 ignition[1382]: INFO : Ignition 2.14.0 Jul 2 00:43:38.402846 ignition[1382]: INFO : Stage: umount Jul 2 00:43:38.405944 ignition[1382]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:43:38.405944 ignition[1382]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:43:38.413226 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:43:38.424342 ignition[1382]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:43:38.426912 ignition[1382]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:43:38.429950 ignition[1382]: INFO : PUT result: OK Jul 2 00:43:38.433013 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:43:38.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.433757 systemd[1]: Stopped sysroot-boot.service. Jul 2 00:43:38.440577 ignition[1382]: INFO : umount: umount passed Jul 2 00:43:38.442185 ignition[1382]: INFO : Ignition finished successfully Jul 2 00:43:38.445479 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:43:38.446422 systemd[1]: Stopped ignition-mount.service. 
Jul 2 00:43:38.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.448885 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:43:38.448992 systemd[1]: Stopped ignition-disks.service. Jul 2 00:43:38.450584 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:43:38.450681 systemd[1]: Stopped ignition-kargs.service. Jul 2 00:43:38.452231 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:43:38.452322 systemd[1]: Stopped ignition-fetch.service. Jul 2 00:43:38.453847 systemd[1]: Stopped target network.target. Jul 2 00:43:38.455286 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:43:38.456108 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 00:43:38.458158 systemd[1]: Stopped target paths.target. Jul 2 00:43:38.459497 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:43:38.460022 systemd[1]: Stopped systemd-ask-password-console.path. 
Jul 2 00:43:38.480166 systemd[1]: Stopped target slices.target. Jul 2 00:43:38.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.481782 systemd[1]: Stopped target sockets.target. Jul 2 00:43:38.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.483304 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:43:38.483522 systemd[1]: Closed iscsid.socket. Jul 2 00:43:38.485990 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:43:38.486046 systemd[1]: Closed iscsiuio.socket. Jul 2 00:43:38.487308 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:43:38.487399 systemd[1]: Stopped ignition-setup.service. Jul 2 00:43:38.489147 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:43:38.489396 systemd[1]: Stopped initrd-setup-root.service. Jul 2 00:43:38.492143 systemd[1]: Stopping systemd-networkd.service... Jul 2 00:43:38.494507 systemd[1]: Stopping systemd-resolved.service... Jul 2 00:43:38.509914 systemd-networkd[1188]: eth0: DHCPv6 lease lost Jul 2 00:43:38.513002 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:43:38.514795 systemd[1]: Stopped systemd-networkd.service. Jul 2 00:43:38.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.516563 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:43:38.516634 systemd[1]: Closed systemd-networkd.socket. Jul 2 00:43:38.524000 audit: BPF prog-id=9 op=UNLOAD Jul 2 00:43:38.523423 systemd[1]: Stopping network-cleanup.service... 
Jul 2 00:43:38.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.525680 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:43:38.525812 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 00:43:38.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.530442 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:43:38.530542 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:43:38.532338 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:43:38.532647 systemd[1]: Stopped systemd-modules-load.service. Jul 2 00:43:38.539464 systemd[1]: Stopping systemd-udevd.service... Jul 2 00:43:38.559404 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 00:43:38.560660 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:43:38.560859 systemd[1]: Stopped systemd-resolved.service. Jul 2 00:43:38.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.569000 audit: BPF prog-id=6 op=UNLOAD Jul 2 00:43:38.573436 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:43:38.575284 systemd[1]: Stopped systemd-udevd.service. 
Jul 2 00:43:38.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.578623 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:43:38.580460 systemd[1]: Stopped network-cleanup.service. Jul 2 00:43:38.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.583770 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:43:38.583869 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 00:43:38.597194 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:43:38.599080 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 00:43:38.601975 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:43:38.602081 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 00:43:38.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.606557 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:43:38.606644 systemd[1]: Stopped dracut-cmdline.service. Jul 2 00:43:38.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.609559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:43:38.612687 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 00:43:38.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:38.617005 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 00:43:38.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.631345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:43:38.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:38.631490 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 00:43:38.639009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:43:38.639267 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 00:43:38.641468 systemd[1]: Reached target initrd-switch-root.target. Jul 2 00:43:38.645580 systemd[1]: Starting initrd-switch-root.service... Jul 2 00:43:38.662990 systemd[1]: Switching root. Jul 2 00:43:38.698017 iscsid[1193]: iscsid shutting down. Jul 2 00:43:38.700391 systemd-journald[309]: Received SIGTERM from PID 1 (n/a). Jul 2 00:43:38.700481 systemd-journald[309]: Journal stopped Jul 2 00:43:44.861250 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 00:43:44.861359 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 00:43:44.861393 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 00:43:44.861431 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:43:44.861475 kernel: SELinux: policy capability open_perms=1 Jul 2 00:43:44.861507 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:43:44.861537 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:43:44.861567 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:43:44.861596 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:43:44.861647 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:43:44.861683 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:43:44.861717 systemd[1]: Successfully loaded SELinux policy in 137.438ms. Jul 2 00:43:44.861778 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.685ms. Jul 2 00:43:44.861815 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 00:43:44.861850 systemd[1]: Detected virtualization amazon. Jul 2 00:43:44.861882 systemd[1]: Detected architecture arm64. Jul 2 00:43:44.861915 systemd[1]: Detected first boot. Jul 2 00:43:44.861948 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:43:44.861980 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 00:43:44.862011 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:43:44.862046 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 00:43:44.862080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:43:44.862114 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:43:44.862146 kernel: kauditd_printk_skb: 54 callbacks suppressed Jul 2 00:43:44.862174 kernel: audit: type=1334 audit(1719881024.521:86): prog-id=12 op=LOAD Jul 2 00:43:44.862203 kernel: audit: type=1334 audit(1719881024.521:87): prog-id=3 op=UNLOAD Jul 2 00:43:44.862250 kernel: audit: type=1334 audit(1719881024.523:88): prog-id=13 op=LOAD Jul 2 00:43:44.862283 kernel: audit: type=1334 audit(1719881024.525:89): prog-id=14 op=LOAD Jul 2 00:43:44.862338 kernel: audit: type=1334 audit(1719881024.525:90): prog-id=4 op=UNLOAD Jul 2 00:43:44.862368 kernel: audit: type=1334 audit(1719881024.525:91): prog-id=5 op=UNLOAD Jul 2 00:43:44.862748 kernel: audit: type=1334 audit(1719881024.529:92): prog-id=15 op=LOAD Jul 2 00:43:44.862871 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 00:43:44.862905 kernel: audit: type=1334 audit(1719881024.530:93): prog-id=12 op=UNLOAD Jul 2 00:43:44.862937 systemd[1]: Stopped iscsiuio.service. Jul 2 00:43:44.862967 kernel: audit: type=1334 audit(1719881024.532:94): prog-id=16 op=LOAD Jul 2 00:43:44.862998 kernel: audit: type=1334 audit(1719881024.534:95): prog-id=17 op=LOAD Jul 2 00:43:44.863034 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 00:43:44.863066 systemd[1]: Stopped iscsid.service. Jul 2 00:43:44.863099 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:43:44.863128 systemd[1]: Stopped initrd-switch-root.service. Jul 2 00:43:44.863157 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jul 2 00:43:44.863187 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 00:43:44.863757 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 00:43:44.863800 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 00:43:44.863873 systemd[1]: Created slice system-getty.slice. Jul 2 00:43:44.863915 systemd[1]: Created slice system-modprobe.slice. Jul 2 00:43:44.863948 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 00:43:44.863980 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 00:43:44.864010 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 00:43:44.864041 systemd[1]: Created slice user.slice. Jul 2 00:43:44.864075 systemd[1]: Started systemd-ask-password-console.path. Jul 2 00:43:44.864104 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 00:43:44.864374 systemd[1]: Set up automount boot.automount. Jul 2 00:43:44.864406 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 00:43:44.864436 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 00:43:44.864465 systemd[1]: Stopped target initrd-fs.target. Jul 2 00:43:44.864496 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 00:43:44.864529 systemd[1]: Reached target integritysetup.target. Jul 2 00:43:44.864559 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 00:43:44.864593 systemd[1]: Reached target remote-fs.target. Jul 2 00:43:44.864623 systemd[1]: Reached target slices.target. Jul 2 00:43:44.864656 systemd[1]: Reached target swap.target. Jul 2 00:43:44.864688 systemd[1]: Reached target torcx.target. Jul 2 00:43:44.864725 systemd[1]: Reached target veritysetup.target. Jul 2 00:43:44.864764 systemd[1]: Listening on systemd-coredump.socket. Jul 2 00:43:44.864796 systemd[1]: Listening on systemd-initctl.socket. Jul 2 00:43:44.864826 systemd[1]: Listening on systemd-networkd.socket. Jul 2 00:43:44.864855 systemd[1]: Listening on systemd-udevd-control.socket. 
Jul 2 00:43:44.864884 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 00:43:44.864914 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 00:43:44.864945 systemd[1]: Mounting dev-hugepages.mount... Jul 2 00:43:44.864979 systemd[1]: Mounting dev-mqueue.mount... Jul 2 00:43:44.865009 systemd[1]: Mounting media.mount... Jul 2 00:43:44.865040 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 00:43:44.865072 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 00:43:44.865103 systemd[1]: Mounting tmp.mount... Jul 2 00:43:44.865133 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 00:43:44.865163 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:44.865193 systemd[1]: Starting kmod-static-nodes.service... Jul 2 00:43:44.865305 systemd[1]: Starting modprobe@configfs.service... Jul 2 00:43:44.865345 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:44.865375 systemd[1]: Starting modprobe@drm.service... Jul 2 00:43:44.865405 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:44.865436 systemd[1]: Starting modprobe@fuse.service... Jul 2 00:43:44.865466 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:44.865497 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:43:44.865530 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:43:44.865559 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 00:43:44.865589 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:43:44.865638 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:43:44.865675 systemd[1]: Stopped systemd-journald.service. Jul 2 00:43:44.865705 systemd[1]: Starting systemd-journald.service... Jul 2 00:43:44.865733 kernel: fuse: init (API version 7.34) Jul 2 00:43:44.865764 kernel: loop: module loaded Jul 2 00:43:44.865792 systemd[1]: Starting systemd-modules-load.service... 
Jul 2 00:43:44.865821 systemd[1]: Starting systemd-network-generator.service... Jul 2 00:43:44.865850 systemd[1]: Starting systemd-remount-fs.service... Jul 2 00:43:44.865882 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 00:43:44.865917 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:43:44.865947 systemd[1]: Stopped verity-setup.service. Jul 2 00:43:44.865976 systemd[1]: Mounted dev-hugepages.mount. Jul 2 00:43:44.866006 systemd[1]: Mounted dev-mqueue.mount. Jul 2 00:43:44.866036 systemd[1]: Mounted media.mount. Jul 2 00:43:44.866066 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 00:43:44.866095 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 00:43:44.866123 systemd[1]: Mounted tmp.mount. Jul 2 00:43:44.866155 systemd-journald[1490]: Journal started Jul 2 00:43:44.866294 systemd-journald[1490]: Runtime Journal (/run/log/journal/ec276d494ad9f9e2909c4fa0d0bac2fa) is 8.0M, max 75.4M, 67.4M free. Jul 2 00:43:39.600000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:43:39.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:43:39.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:43:39.851000 audit: BPF prog-id=10 op=LOAD Jul 2 00:43:39.851000 audit: BPF prog-id=10 op=UNLOAD Jul 2 00:43:39.851000 audit: BPF prog-id=11 op=LOAD Jul 2 00:43:39.851000 audit: BPF prog-id=11 op=UNLOAD Jul 2 00:43:40.105000 audit[1415]: AVC avc: denied { associate } for pid=1415 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 00:43:40.105000 
audit[1415]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458c4 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1398 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:40.105000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:43:40.108000 audit[1415]: AVC avc: denied { associate } for pid=1415 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 00:43:40.108000 audit[1415]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001459a9 a2=1ed a3=0 items=2 ppid=1398 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:40.108000 audit: CWD cwd="/" Jul 2 00:43:40.108000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:43:40.108000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 00:43:40.108000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 00:43:44.521000 audit: BPF prog-id=12 op=LOAD Jul 2 00:43:44.521000 audit: BPF prog-id=3 op=UNLOAD Jul 2 00:43:44.523000 audit: BPF prog-id=13 op=LOAD Jul 2 00:43:44.525000 audit: BPF prog-id=14 op=LOAD Jul 2 00:43:44.525000 audit: BPF prog-id=4 op=UNLOAD Jul 2 00:43:44.525000 audit: BPF prog-id=5 op=UNLOAD Jul 2 00:43:44.529000 audit: BPF prog-id=15 op=LOAD Jul 2 00:43:44.530000 audit: BPF prog-id=12 op=UNLOAD Jul 2 00:43:44.532000 audit: BPF prog-id=16 op=LOAD Jul 2 00:43:44.534000 audit: BPF prog-id=17 op=LOAD Jul 2 00:43:44.534000 audit: BPF prog-id=13 op=UNLOAD Jul 2 00:43:44.534000 audit: BPF prog-id=14 op=UNLOAD Jul 2 00:43:44.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.544000 audit: BPF prog-id=15 op=UNLOAD Jul 2 00:43:44.870883 systemd[1]: Started systemd-journald.service. Jul 2 00:43:44.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:44.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.779000 audit: BPF prog-id=18 op=LOAD Jul 2 00:43:44.779000 audit: BPF prog-id=19 op=LOAD Jul 2 00:43:44.779000 audit: BPF prog-id=20 op=LOAD Jul 2 00:43:44.779000 audit: BPF prog-id=16 op=UNLOAD Jul 2 00:43:44.779000 audit: BPF prog-id=17 op=UNLOAD Jul 2 00:43:44.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:44.855000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:43:44.855000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffca22a2b0 a2=4000 a3=1 items=0 ppid=1 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:44.855000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:43:44.520452 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:43:44.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:40.078834 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:43:44.537464 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:43:40.090099 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 00:43:40.090151 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 00:43:44.873055 systemd[1]: Finished kmod-static-nodes.service. 
Jul 2 00:43:40.090249 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 00:43:40.090276 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 00:43:40.090343 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 00:43:40.090374 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 00:43:44.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:40.090785 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 00:43:40.090862 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 00:43:40.090898 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 00:43:40.092478 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 00:43:40.092783 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 00:43:44.876509 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:43:40.092830 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 00:43:44.876831 systemd[1]: Finished modprobe@configfs.service. 
Jul 2 00:43:40.092870 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 00:43:40.092917 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 00:43:40.092955 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 00:43:44.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:43.571854 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:43Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:43.572399 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:43Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:43.572642 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:43Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:43.573133 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:43Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 00:43:43.573264 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:43Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 00:43:43.573405 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-07-02T00:43:43Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 00:43:44.880864 systemd[1]: modprobe@dm_mod.service: Deactivated 
successfully. Jul 2 00:43:44.881187 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:44.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.886564 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:43:44.886841 systemd[1]: Finished modprobe@drm.service. Jul 2 00:43:44.888901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:44.889185 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:44.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.892058 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 2 00:43:44.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.892445 systemd[1]: Finished modprobe@fuse.service. Jul 2 00:43:44.895018 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:44.895559 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:44.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.898198 systemd[1]: Finished systemd-modules-load.service. Jul 2 00:43:44.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.900334 systemd[1]: Finished systemd-network-generator.service. Jul 2 00:43:44.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.902567 systemd[1]: Finished systemd-remount-fs.service. 
Jul 2 00:43:44.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.905086 systemd[1]: Reached target network-pre.target. Jul 2 00:43:44.909338 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 00:43:44.914016 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 00:43:44.920845 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:43:44.923959 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 00:43:44.927810 systemd[1]: Starting systemd-journal-flush.service... Jul 2 00:43:44.929448 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:44.931679 systemd[1]: Starting systemd-random-seed.service... Jul 2 00:43:44.934407 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:44.936723 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:43:44.941164 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 00:43:44.943676 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 00:43:44.959563 systemd-journald[1490]: Time spent on flushing to /var/log/journal/ec276d494ad9f9e2909c4fa0d0bac2fa is 82.731ms for 1135 entries. Jul 2 00:43:44.959563 systemd-journald[1490]: System Journal (/var/log/journal/ec276d494ad9f9e2909c4fa0d0bac2fa) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:43:45.060845 systemd-journald[1490]: Received client request to flush runtime journal. Jul 2 00:43:44.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:45.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.980454 systemd[1]: Finished systemd-random-seed.service. Jul 2 00:43:45.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.982301 systemd[1]: Reached target first-boot-complete.target. Jul 2 00:43:45.011432 systemd[1]: Finished systemd-sysctl.service. Jul 2 00:43:45.062382 systemd[1]: Finished systemd-journal-flush.service. Jul 2 00:43:45.102011 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 00:43:45.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.106411 systemd[1]: Starting systemd-sysusers.service... Jul 2 00:43:45.132953 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 00:43:45.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.137911 systemd[1]: Starting systemd-udev-settle.service... Jul 2 00:43:45.155371 udevadm[1534]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 00:43:45.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:45.246026 systemd[1]: Finished systemd-sysusers.service. Jul 2 00:43:46.001824 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 00:43:46.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.003000 audit: BPF prog-id=21 op=LOAD Jul 2 00:43:46.003000 audit: BPF prog-id=22 op=LOAD Jul 2 00:43:46.003000 audit: BPF prog-id=7 op=UNLOAD Jul 2 00:43:46.003000 audit: BPF prog-id=8 op=UNLOAD Jul 2 00:43:46.005807 systemd[1]: Starting systemd-udevd.service... Jul 2 00:43:46.044967 systemd-udevd[1535]: Using default interface naming scheme 'v252'. Jul 2 00:43:46.105590 systemd[1]: Started systemd-udevd.service. Jul 2 00:43:46.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.107000 audit: BPF prog-id=23 op=LOAD Jul 2 00:43:46.110470 systemd[1]: Starting systemd-networkd.service... Jul 2 00:43:46.134000 audit: BPF prog-id=24 op=LOAD Jul 2 00:43:46.134000 audit: BPF prog-id=25 op=LOAD Jul 2 00:43:46.134000 audit: BPF prog-id=26 op=LOAD Jul 2 00:43:46.137029 systemd[1]: Starting systemd-userdbd.service... Jul 2 00:43:46.194826 (udev-worker)[1548]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:43:46.202520 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 00:43:46.209822 systemd[1]: Started systemd-userdbd.service. Jul 2 00:43:46.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:46.341667 systemd-networkd[1540]: lo: Link UP Jul 2 00:43:46.342130 systemd-networkd[1540]: lo: Gained carrier Jul 2 00:43:46.343318 systemd-networkd[1540]: Enumeration completed Jul 2 00:43:46.343614 systemd[1]: Started systemd-networkd.service. Jul 2 00:43:46.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.347147 systemd-networkd[1540]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:43:46.347594 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 00:43:46.354241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 00:43:46.354887 systemd-networkd[1540]: eth0: Link UP Jul 2 00:43:46.355429 systemd-networkd[1540]: eth0: Gained carrier Jul 2 00:43:46.368475 systemd-networkd[1540]: eth0: DHCPv4 address 172.31.27.155/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:43:46.519261 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1562) Jul 2 00:43:46.630693 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 00:43:46.633090 systemd[1]: Finished systemd-udev-settle.service. Jul 2 00:43:46.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.637157 systemd[1]: Starting lvm2-activation-early.service... Jul 2 00:43:46.720934 lvm[1654]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:43:46.765962 systemd[1]: Finished lvm2-activation-early.service. 
Jul 2 00:43:46.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.767816 systemd[1]: Reached target cryptsetup.target. Jul 2 00:43:46.771683 systemd[1]: Starting lvm2-activation.service... Jul 2 00:43:46.779998 lvm[1655]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:43:46.812898 systemd[1]: Finished lvm2-activation.service. Jul 2 00:43:46.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.814703 systemd[1]: Reached target local-fs-pre.target. Jul 2 00:43:46.816301 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:43:46.816355 systemd[1]: Reached target local-fs.target. Jul 2 00:43:46.817905 systemd[1]: Reached target machines.target. Jul 2 00:43:46.821566 systemd[1]: Starting ldconfig.service... Jul 2 00:43:46.823626 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:46.823763 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:46.826847 systemd[1]: Starting systemd-boot-update.service... Jul 2 00:43:46.830502 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 00:43:46.835811 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 00:43:46.840340 systemd[1]: Starting systemd-sysext.service... 
Jul 2 00:43:46.846096 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1657 (bootctl) Jul 2 00:43:46.848864 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 00:43:46.875518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 00:43:46.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.895240 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 00:43:46.904888 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 00:43:46.905280 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 00:43:46.923288 kernel: loop0: detected capacity change from 0 to 194512 Jul 2 00:43:46.976246 systemd-fsck[1666]: fsck.fat 4.2 (2021-01-31) Jul 2 00:43:46.976246 systemd-fsck[1666]: /dev/nvme0n1p1: 236 files, 117047/258078 clusters Jul 2 00:43:46.978764 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 00:43:46.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.983204 systemd[1]: Mounting boot.mount... Jul 2 00:43:47.002801 systemd[1]: Mounted boot.mount. Jul 2 00:43:47.028102 systemd[1]: Finished systemd-boot-update.service. Jul 2 00:43:47.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:47.405247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:43:47.468253 kernel: loop1: detected capacity change from 0 to 194512 Jul 2 00:43:47.482636 (sd-sysext)[1683]: Using extensions 'kubernetes'. Jul 2 00:43:47.484009 (sd-sysext)[1683]: Merged extensions into '/usr'. Jul 2 00:43:47.522896 systemd[1]: Mounting usr-share-oem.mount... Jul 2 00:43:47.524750 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:47.525398 systemd-networkd[1540]: eth0: Gained IPv6LL Jul 2 00:43:47.533670 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:47.537448 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:47.541440 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:47.543835 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:47.544199 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:47.551148 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 00:43:47.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:47.554139 systemd[1]: Mounted usr-share-oem.mount. Jul 2 00:43:47.556704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:47.557023 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:47.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:47.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:47.560846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:47.561294 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:47.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:47.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:47.564570 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:47.564958 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:47.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:47.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:47.568048 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:47.568359 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:47.570052 systemd[1]: Finished systemd-sysext.service. 
Jul 2 00:43:47.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:47.574404 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:43:47.578687 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 2 00:43:47.603674 systemd[1]: Reloading.
Jul 2 00:43:47.672875 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 00:43:47.687620 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:43:47.696947 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:43:47.714929 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2024-07-02T00:43:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 00:43:47.723556 /usr/lib/systemd/system-generators/torcx-generator[1709]: time="2024-07-02T00:43:47Z" level=info msg="torcx already run"
Jul 2 00:43:48.017932 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 00:43:48.018331 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 00:43:48.066599 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:43:48.082667 ldconfig[1656]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:43:48.210000 audit: BPF prog-id=27 op=LOAD
Jul 2 00:43:48.210000 audit: BPF prog-id=24 op=UNLOAD
Jul 2 00:43:48.210000 audit: BPF prog-id=28 op=LOAD
Jul 2 00:43:48.210000 audit: BPF prog-id=29 op=LOAD
Jul 2 00:43:48.210000 audit: BPF prog-id=25 op=UNLOAD
Jul 2 00:43:48.210000 audit: BPF prog-id=26 op=UNLOAD
Jul 2 00:43:48.216000 audit: BPF prog-id=30 op=LOAD
Jul 2 00:43:48.216000 audit: BPF prog-id=31 op=LOAD
Jul 2 00:43:48.216000 audit: BPF prog-id=21 op=UNLOAD
Jul 2 00:43:48.216000 audit: BPF prog-id=22 op=UNLOAD
Jul 2 00:43:48.217000 audit: BPF prog-id=32 op=LOAD
Jul 2 00:43:48.218000 audit: BPF prog-id=23 op=UNLOAD
Jul 2 00:43:48.221000 audit: BPF prog-id=33 op=LOAD
Jul 2 00:43:48.221000 audit: BPF prog-id=18 op=UNLOAD
Jul 2 00:43:48.221000 audit: BPF prog-id=34 op=LOAD
Jul 2 00:43:48.221000 audit: BPF prog-id=35 op=LOAD
Jul 2 00:43:48.222000 audit: BPF prog-id=19 op=UNLOAD
Jul 2 00:43:48.222000 audit: BPF prog-id=20 op=UNLOAD
Jul 2 00:43:48.243572 systemd[1]: Finished ldconfig.service.
Jul 2 00:43:48.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.247448 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 2 00:43:48.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.257875 systemd[1]: Starting audit-rules.service...
Jul 2 00:43:48.261613 systemd[1]: Starting clean-ca-certificates.service...
Jul 2 00:43:48.265888 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 2 00:43:48.268000 audit: BPF prog-id=36 op=LOAD
Jul 2 00:43:48.271592 systemd[1]: Starting systemd-resolved.service...
Jul 2 00:43:48.275000 audit: BPF prog-id=37 op=LOAD
Jul 2 00:43:48.278719 systemd[1]: Starting systemd-timesyncd.service...
Jul 2 00:43:48.283781 systemd[1]: Starting systemd-update-utmp.service...
Jul 2 00:43:48.299835 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.302410 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 00:43:48.306842 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 00:43:48.311695 systemd[1]: Starting modprobe@loop.service...
Jul 2 00:43:48.314502 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.314828 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 00:43:48.321671 systemd[1]: Finished clean-ca-certificates.service.
Jul 2 00:43:48.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.324396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:43:48.324678 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 00:43:48.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.327014 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:43:48.330315 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.335465 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 00:43:48.337093 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.337550 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 00:43:48.337824 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:43:48.341000 audit[1771]: SYSTEM_BOOT pid=1771 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.350433 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.354083 systemd[1]: Starting modprobe@drm.service...
Jul 2 00:43:48.356482 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.356928 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 00:43:48.357455 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:43:48.360412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:43:48.361551 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 00:43:48.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.364534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:43:48.364863 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 00:43:48.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.367762 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:43:48.370579 systemd[1]: Finished systemd-update-utmp.service.
Jul 2 00:43:48.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.378496 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:43:48.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.381996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:43:48.382379 systemd[1]: Finished modprobe@loop.service.
Jul 2 00:43:48.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.384390 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.392907 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:43:48.393258 systemd[1]: Finished modprobe@drm.service.
Jul 2 00:43:48.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.405831 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 2 00:43:48.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.410327 systemd[1]: Starting systemd-update-done.service...
Jul 2 00:43:48.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.436914 systemd[1]: Finished systemd-update-done.service.
Jul 2 00:43:48.498783 systemd[1]: Started systemd-timesyncd.service.
Jul 2 00:43:48.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:48.500595 systemd[1]: Reached target time-set.target.
Jul 2 00:43:48.510747 systemd-resolved[1769]: Positive Trust Anchors:
Jul 2 00:43:48.510779 systemd-resolved[1769]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:43:48.510831 systemd-resolved[1769]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 00:43:48.537000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 00:43:48.537000 audit[1794]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe11190b0 a2=420 a3=0 items=0 ppid=1766 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 00:43:48.537000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 00:43:48.539091 augenrules[1794]: No rules
Jul 2 00:43:48.540882 systemd[1]: Finished audit-rules.service.
Jul 2 00:43:48.555701 systemd-resolved[1769]: Defaulting to hostname 'linux'.
Jul 2 00:43:48.558922 systemd[1]: Started systemd-resolved.service.
Jul 2 00:43:48.560618 systemd[1]: Reached target network.target.
Jul 2 00:43:48.562176 systemd[1]: Reached target network-online.target.
Jul 2 00:43:48.563816 systemd[1]: Reached target nss-lookup.target.
Jul 2 00:43:48.565344 systemd[1]: Reached target sysinit.target.
Jul 2 00:43:48.567013 systemd[1]: Started motdgen.path.
Jul 2 00:43:48.568445 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 2 00:43:48.570836 systemd[1]: Started logrotate.timer.
Jul 2 00:43:48.572903 systemd[1]: Started mdadm.timer.
Jul 2 00:43:48.574279 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 2 00:43:48.576418 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:43:48.576478 systemd[1]: Reached target paths.target.
Jul 2 00:43:48.577931 systemd[1]: Reached target timers.target.
Jul 2 00:43:48.579809 systemd[1]: Listening on dbus.socket.
Jul 2 00:43:48.583403 systemd[1]: Starting docker.socket...
Jul 2 00:43:48.594298 systemd[1]: Listening on sshd.socket.
Jul 2 00:43:48.596199 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 00:43:48.597194 systemd[1]: Listening on docker.socket.
Jul 2 00:43:48.599151 systemd[1]: Reached target sockets.target.
Jul 2 00:43:48.601093 systemd[1]: Reached target basic.target.
Jul 2 00:43:48.602722 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.602915 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 00:43:48.605122 systemd[1]: Started amazon-ssm-agent.service.
Jul 2 00:43:48.608203 systemd-timesyncd[1770]: Contacted time server 192.48.105.15:123 (0.flatcar.pool.ntp.org).
Jul 2 00:43:48.608332 systemd-timesyncd[1770]: Initial clock synchronization to Tue 2024-07-02 00:43:48.287162 UTC.
Jul 2 00:43:48.613686 systemd[1]: Starting containerd.service...
Jul 2 00:43:48.617646 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 2 00:43:48.621816 systemd[1]: Starting dbus.service...
Jul 2 00:43:48.628254 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 00:43:48.632276 systemd[1]: Starting extend-filesystems.service...
Jul 2 00:43:48.633801 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 00:43:48.636703 systemd[1]: Starting kubelet.service...
Jul 2 00:43:48.642123 systemd[1]: Starting motdgen.service...
Jul 2 00:43:48.646540 systemd[1]: Started nvidia.service.
Jul 2 00:43:48.650880 systemd[1]: Starting prepare-helm.service...
Jul 2 00:43:48.655145 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 00:43:48.660652 systemd[1]: Starting sshd-keygen.service...
Jul 2 00:43:48.672401 systemd[1]: Starting systemd-logind.service...
Jul 2 00:43:48.673971 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 00:43:48.674159 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:43:48.675155 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:43:48.676803 systemd[1]: Starting update-engine.service...
Jul 2 00:43:48.684656 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 00:43:48.729834 jq[1821]: true
Jul 2 00:43:48.777523 jq[1805]: false
Jul 2 00:43:48.789147 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:43:48.789552 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 00:43:48.790320 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:43:48.790614 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 00:43:48.817336 tar[1824]: linux-arm64/helm
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found loop1
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1p1
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1p2
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1p3
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found usr
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1p4
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1p6
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1p7
Jul 2 00:43:48.833729 extend-filesystems[1806]: Found nvme0n1p9
Jul 2 00:43:48.833729 extend-filesystems[1806]: Checking size of /dev/nvme0n1p9
Jul 2 00:43:48.913361 dbus-daemon[1804]: [system] SELinux support is enabled
Jul 2 00:43:48.913656 systemd[1]: Started dbus.service.
Jul 2 00:43:48.918936 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:43:48.918980 systemd[1]: Reached target system-config.target.
Jul 2 00:43:48.920706 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:43:48.920748 systemd[1]: Reached target user-config.target.
Jul 2 00:43:48.952624 jq[1834]: true
Jul 2 00:43:48.957237 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:43:48.957668 systemd[1]: Finished motdgen.service.
Jul 2 00:43:48.979334 dbus-daemon[1804]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1540 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 2 00:43:48.985890 systemd[1]: Starting systemd-hostnamed.service...
Jul 2 00:43:48.992962 extend-filesystems[1806]: Resized partition /dev/nvme0n1p9
Jul 2 00:43:49.004584 env[1826]: time="2024-07-02T00:43:49.004506451Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 00:43:49.014387 extend-filesystems[1857]: resize2fs 1.46.5 (30-Dec-2021)
Jul 2 00:43:49.022949 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:43:49.025296 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 2 00:43:49.033738 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 2 00:43:49.104253 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 2 00:43:49.153460 extend-filesystems[1857]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 2 00:43:49.153460 extend-filesystems[1857]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:43:49.153460 extend-filesystems[1857]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 2 00:43:49.159891 extend-filesystems[1806]: Resized filesystem in /dev/nvme0n1p9
Jul 2 00:43:49.177103 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:43:49.177499 systemd[1]: Finished extend-filesystems.service.
Jul 2 00:43:49.198751 env[1826]: time="2024-07-02T00:43:49.198629329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:43:49.199143 env[1826]: time="2024-07-02T00:43:49.199109092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:43:49.201898 env[1826]: time="2024-07-02T00:43:49.201739699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:43:49.202103 env[1826]: time="2024-07-02T00:43:49.202072439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:43:49.202341 update_engine[1819]: I0702 00:43:49.201592 1819 main.cc:92] Flatcar Update Engine starting
Jul 2 00:43:49.203329 env[1826]: time="2024-07-02T00:43:49.203279121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:43:49.204391 env[1826]: time="2024-07-02T00:43:49.204334380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:43:49.204593 env[1826]: time="2024-07-02T00:43:49.204558164Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:43:49.204748 env[1826]: time="2024-07-02T00:43:49.204717594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:43:49.205123 env[1826]: time="2024-07-02T00:43:49.205087101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:43:49.214609 systemd[1]: Started update-engine.service.
Jul 2 00:43:49.219339 systemd[1]: Started locksmithd.service.
Jul 2 00:43:49.221599 update_engine[1819]: I0702 00:43:49.221541 1819 update_check_scheduler.cc:74] Next update check in 3m22s
Jul 2 00:43:49.222493 env[1826]: time="2024-07-02T00:43:49.222347400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:43:49.223039 env[1826]: time="2024-07-02T00:43:49.222989334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:43:49.225894 env[1826]: time="2024-07-02T00:43:49.225820573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:43:49.226283 env[1826]: time="2024-07-02T00:43:49.226246280Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:43:49.226446 amazon-ssm-agent[1801]: 2024/07/02 00:43:49 Failed to load instance info from vault. RegistrationKey does not exist.
Jul 2 00:43:49.226967 env[1826]: time="2024-07-02T00:43:49.226920893Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:43:49.228742 amazon-ssm-agent[1801]: Initializing new seelog logger
Jul 2 00:43:49.238566 env[1826]: time="2024-07-02T00:43:49.238509361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:43:49.238963 env[1826]: time="2024-07-02T00:43:49.238927292Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:43:49.239139 env[1826]: time="2024-07-02T00:43:49.239092804Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:43:49.239664 env[1826]: time="2024-07-02T00:43:49.239473000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.239897 env[1826]: time="2024-07-02T00:43:49.239851653Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.240071 env[1826]: time="2024-07-02T00:43:49.240032208Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.240248 env[1826]: time="2024-07-02T00:43:49.240194955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.240967 env[1826]: time="2024-07-02T00:43:49.240876686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.241359 env[1826]: time="2024-07-02T00:43:49.241309626Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.241605 env[1826]: time="2024-07-02T00:43:49.241559915Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.242155 env[1826]: time="2024-07-02T00:43:49.242099575Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.242942 env[1826]: time="2024-07-02T00:43:49.242892785Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:43:49.244737 env[1826]: time="2024-07-02T00:43:49.244677992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:43:49.245636 env[1826]: time="2024-07-02T00:43:49.245584521Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:43:49.246638 env[1826]: time="2024-07-02T00:43:49.246579825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:43:49.248464 env[1826]: time="2024-07-02T00:43:49.248409091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.248672 env[1826]: time="2024-07-02T00:43:49.248637529Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:43:49.248915 env[1826]: time="2024-07-02T00:43:49.248865540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.253468 env[1826]: time="2024-07-02T00:43:49.253385472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.257561 env[1826]: time="2024-07-02T00:43:49.257125498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.257971 env[1826]: time="2024-07-02T00:43:49.257908720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.258362 env[1826]: time="2024-07-02T00:43:49.258323760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.258549 env[1826]: time="2024-07-02T00:43:49.258517366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.258895 env[1826]: time="2024-07-02T00:43:49.258812681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.259288 env[1826]: time="2024-07-02T00:43:49.259243536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.259551 env[1826]: time="2024-07-02T00:43:49.259494125Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:43:49.260161 env[1826]: time="2024-07-02T00:43:49.260116800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.260398 env[1826]: time="2024-07-02T00:43:49.260365960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.260562 env[1826]: time="2024-07-02T00:43:49.260530481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.260742 env[1826]: time="2024-07-02T00:43:49.260693113Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 00:43:49.260931 env[1826]: time="2024-07-02T00:43:49.260880279Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 00:43:49.261081 env[1826]: time="2024-07-02T00:43:49.261050525Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 00:43:49.261314 env[1826]: time="2024-07-02T00:43:49.261262434Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 00:43:49.261562 env[1826]: time="2024-07-02T00:43:49.261503335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:43:49.262259 env[1826]: time="2024-07-02T00:43:49.262109516Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:43:49.263252 env[1826]: time="2024-07-02T00:43:49.262700780Z" level=info msg="Connect containerd service"
Jul 2 00:43:49.263252 env[1826]: time="2024-07-02T00:43:49.262868457Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:43:49.264883 env[1826]: time="2024-07-02T00:43:49.264816779Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:43:49.267263 amazon-ssm-agent[1801]: New Seelog Logger Creation Complete
Jul 2 00:43:49.267411 amazon-ssm-agent[1801]: 2024/07/02 00:43:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:43:49.267411 amazon-ssm-agent[1801]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 00:43:49.267749 amazon-ssm-agent[1801]: 2024/07/02 00:43:49 processing appconfig overrides
Jul 2 00:43:49.268550 env[1826]: time="2024-07-02T00:43:49.268498256Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 00:43:49.270162 env[1826]: time="2024-07-02T00:43:49.270114565Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 00:43:49.272419 env[1826]: time="2024-07-02T00:43:49.272373972Z" level=info msg="containerd successfully booted in 0.286524s"
Jul 2 00:43:49.272523 systemd[1]: Started containerd.service.
Jul 2 00:43:49.294947 env[1826]: time="2024-07-02T00:43:49.294757354Z" level=info msg="Start subscribing containerd event" Jul 2 00:43:49.296981 env[1826]: time="2024-07-02T00:43:49.296135365Z" level=info msg="Start recovering state" Jul 2 00:43:49.298853 bash[1889]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:43:49.300720 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 00:43:49.303943 env[1826]: time="2024-07-02T00:43:49.303896933Z" level=info msg="Start event monitor" Jul 2 00:43:49.309921 env[1826]: time="2024-07-02T00:43:49.309841077Z" level=info msg="Start snapshots syncer" Jul 2 00:43:49.318291 env[1826]: time="2024-07-02T00:43:49.317898098Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:43:49.320635 env[1826]: time="2024-07-02T00:43:49.320581541Z" level=info msg="Start streaming server" Jul 2 00:43:49.321624 systemd[1]: nvidia.service: Deactivated successfully. Jul 2 00:43:49.473347 systemd-logind[1817]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:43:49.473398 systemd-logind[1817]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 00:43:49.477394 systemd-logind[1817]: New seat seat0. Jul 2 00:43:49.487170 systemd[1]: Started systemd-logind.service. 
Jul 2 00:43:49.549192 coreos-metadata[1803]: Jul 02 00:43:49.548 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:43:49.553700 coreos-metadata[1803]: Jul 02 00:43:49.553 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Jul 2 00:43:49.554777 coreos-metadata[1803]: Jul 02 00:43:49.554 INFO Fetch successful Jul 2 00:43:49.555065 coreos-metadata[1803]: Jul 02 00:43:49.554 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 00:43:49.556576 coreos-metadata[1803]: Jul 02 00:43:49.556 INFO Fetch successful Jul 2 00:43:49.559378 unknown[1803]: wrote ssh authorized keys file for user: core Jul 2 00:43:49.656156 update-ssh-keys[1936]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:43:49.656062 dbus-daemon[1804]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 00:43:49.656387 systemd[1]: Started systemd-hostnamed.service. Jul 2 00:43:49.659303 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 00:43:49.664405 dbus-daemon[1804]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1855 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 00:43:49.669064 systemd[1]: Starting polkit.service... Jul 2 00:43:49.710100 polkitd[1944]: Started polkitd version 121 Jul 2 00:43:49.742587 polkitd[1944]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 00:43:49.744842 polkitd[1944]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 00:43:49.748873 polkitd[1944]: Finished loading, compiling and executing 2 rules Jul 2 00:43:49.754260 dbus-daemon[1804]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 00:43:49.754509 systemd[1]: Started polkit.service. 
Jul 2 00:43:49.756934 polkitd[1944]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 00:43:49.804353 systemd-hostnamed[1855]: Hostname set to (transient) Jul 2 00:43:49.804507 systemd-resolved[1769]: System hostname changed to 'ip-172-31-27-155'. Jul 2 00:43:50.074353 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Create new startup processor Jul 2 00:43:50.093842 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [LongRunningPluginsManager] registered plugins: {} Jul 2 00:43:50.098838 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing bookkeeping folders Jul 2 00:43:50.099030 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO removing the completed state files Jul 2 00:43:50.099801 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing bookkeeping folders for long running plugins Jul 2 00:43:50.099964 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Jul 2 00:43:50.100309 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing healthcheck folders for long running plugins Jul 2 00:43:50.100438 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing locations for inventory plugin Jul 2 00:43:50.100572 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing default location for custom inventory Jul 2 00:43:50.100696 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing default location for file inventory Jul 2 00:43:50.100805 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Initializing default location for role inventory Jul 2 00:43:50.100933 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Init the cloudwatchlogs publisher Jul 2 00:43:50.101990 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:downloadContent Jul 2 00:43:50.102724 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform 
independent plugin aws:runDocument Jul 2 00:43:50.102889 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:runPowerShellScript Jul 2 00:43:50.103033 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:updateSsmAgent Jul 2 00:43:50.103846 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:refreshAssociation Jul 2 00:43:50.104190 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:configurePackage Jul 2 00:43:50.104356 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:softwareInventory Jul 2 00:43:50.105885 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:configureDocker Jul 2 00:43:50.108390 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform independent plugin aws:runDockerAction Jul 2 00:43:50.108616 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Successfully loaded platform dependent plugin aws:runShellScript Jul 2 00:43:50.109130 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Jul 2 00:43:50.113302 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO OS: linux, Arch: arm64 Jul 2 00:43:50.119081 amazon-ssm-agent[1801]: datastore file /var/lib/amazon/ssm/i-0f4b0bbf02bbd8206/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Jul 2 00:43:50.187348 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] Starting document processing engine... 
Jul 2 00:43:50.282641 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] [EngineProcessor] Starting Jul 2 00:43:50.377728 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Jul 2 00:43:50.472138 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] Starting message polling Jul 2 00:43:50.566903 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] Starting send replies to MDS Jul 2 00:43:50.662595 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [instanceID=i-0f4b0bbf02bbd8206] Starting association polling Jul 2 00:43:50.757759 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Jul 2 00:43:50.766446 tar[1824]: linux-arm64/LICENSE Jul 2 00:43:50.768502 tar[1824]: linux-arm64/README.md Jul 2 00:43:50.779554 systemd[1]: Finished prepare-helm.service. Jul 2 00:43:50.853046 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] [Association] Launching response handler Jul 2 00:43:50.948630 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Jul 2 00:43:50.990966 locksmithd[1887]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:43:51.043678 systemd[1]: Started kubelet.service. Jul 2 00:43:51.045283 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Jul 2 00:43:51.141123 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Jul 2 00:43:51.237374 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [HealthCheck] HealthCheck reporting agent health. Jul 2 00:43:51.333564 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] Starting session document processing engine... 
Jul 2 00:43:51.430062 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] [EngineProcessor] Starting Jul 2 00:43:51.526823 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Jul 2 00:43:51.623628 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0f4b0bbf02bbd8206, requestId: 866c8202-f535-4d02-a68e-5c8f4397e570 Jul 2 00:43:51.720724 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [OfflineService] Starting document processing engine... Jul 2 00:43:51.818049 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [OfflineService] [EngineProcessor] Starting Jul 2 00:43:51.877551 kubelet[2023]: E0702 00:43:51.877461 2023 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:43:51.881744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:43:51.882096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:43:51.882562 systemd[1]: kubelet.service: Consumed 1.497s CPU time. 
Jul 2 00:43:51.915513 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [OfflineService] [EngineProcessor] Initial processing Jul 2 00:43:52.013195 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [OfflineService] Starting message polling Jul 2 00:43:52.111152 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [OfflineService] Starting send replies to MDS Jul 2 00:43:52.209111 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [LongRunningPluginsManager] starting long running plugin manager Jul 2 00:43:52.307411 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Jul 2 00:43:52.405917 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] listening reply. Jul 2 00:43:52.504514 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Jul 2 00:43:52.603379 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [StartupProcessor] Executing startup processor tasks Jul 2 00:43:52.702468 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Jul 2 00:43:52.801665 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Jul 2 00:43:52.901078 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.5 Jul 2 00:43:53.000775 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0f4b0bbf02bbd8206?role=subscribe&stream=input Jul 2 00:43:53.100560 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0f4b0bbf02bbd8206?role=subscribe&stream=input Jul 
2 00:43:53.200579 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] Starting receiving message from control channel Jul 2 00:43:53.300818 amazon-ssm-agent[1801]: 2024-07-02 00:43:50 INFO [MessageGatewayService] [EngineProcessor] Initial processing Jul 2 00:43:53.401043 amazon-ssm-agent[1801]: 2024-07-02 00:43:53 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Jul 2 00:43:56.515945 systemd[1]: Created slice system-sshd.slice. Jul 2 00:43:57.551087 sshd_keygen[1836]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:43:57.591123 systemd[1]: Finished sshd-keygen.service. Jul 2 00:43:57.596399 systemd[1]: Starting issuegen.service... Jul 2 00:43:57.601986 systemd[1]: Started sshd@0-172.31.27.155:22-139.178.89.65:58762.service. Jul 2 00:43:57.613983 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:43:57.614388 systemd[1]: Finished issuegen.service. Jul 2 00:43:57.618953 systemd[1]: Starting systemd-user-sessions.service... Jul 2 00:43:57.633694 systemd[1]: Finished systemd-user-sessions.service. Jul 2 00:43:57.638140 systemd[1]: Started getty@tty1.service. Jul 2 00:43:57.642553 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 00:43:57.644871 systemd[1]: Reached target getty.target. Jul 2 00:43:57.646727 systemd[1]: Reached target multi-user.target. Jul 2 00:43:57.652754 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 00:43:57.670485 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 00:43:57.670885 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 00:43:57.675424 systemd[1]: Startup finished in 1.143s (kernel) + 8.857s (initrd) + 18.227s (userspace) = 28.228s. 
Jul 2 00:43:57.935235 sshd[2038]: Accepted publickey for core from 139.178.89.65 port 58762 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:43:57.939437 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:57.957072 systemd[1]: Created slice user-500.slice. Jul 2 00:43:57.959863 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 00:43:57.967401 systemd-logind[1817]: New session 1 of user core. Jul 2 00:43:57.982188 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 00:43:57.986581 systemd[1]: Starting user@500.service... Jul 2 00:43:57.993256 (systemd)[2047]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:58.202438 systemd[2047]: Queued start job for default target default.target. Jul 2 00:43:58.203516 systemd[2047]: Reached target paths.target. Jul 2 00:43:58.203568 systemd[2047]: Reached target sockets.target. Jul 2 00:43:58.203599 systemd[2047]: Reached target timers.target. Jul 2 00:43:58.203627 systemd[2047]: Reached target basic.target. Jul 2 00:43:58.203719 systemd[2047]: Reached target default.target. Jul 2 00:43:58.203786 systemd[2047]: Startup finished in 198ms. Jul 2 00:43:58.204384 systemd[1]: Started user@500.service. Jul 2 00:43:58.206511 systemd[1]: Started session-1.scope. Jul 2 00:43:58.350992 systemd[1]: Started sshd@1-172.31.27.155:22-139.178.89.65:36730.service. Jul 2 00:43:58.521440 sshd[2056]: Accepted publickey for core from 139.178.89.65 port 36730 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:43:58.524588 sshd[2056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:58.532587 systemd-logind[1817]: New session 2 of user core. Jul 2 00:43:58.533599 systemd[1]: Started session-2.scope. Jul 2 00:43:58.661454 sshd[2056]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:58.666633 systemd-logind[1817]: Session 2 logged out. 
Waiting for processes to exit. Jul 2 00:43:58.667187 systemd[1]: sshd@1-172.31.27.155:22-139.178.89.65:36730.service: Deactivated successfully. Jul 2 00:43:58.668482 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:43:58.670127 systemd-logind[1817]: Removed session 2. Jul 2 00:43:58.688804 systemd[1]: Started sshd@2-172.31.27.155:22-139.178.89.65:36744.service. Jul 2 00:43:58.854582 sshd[2062]: Accepted publickey for core from 139.178.89.65 port 36744 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:43:58.857612 sshd[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:58.865309 systemd-logind[1817]: New session 3 of user core. Jul 2 00:43:58.866518 systemd[1]: Started session-3.scope. Jul 2 00:43:58.987359 sshd[2062]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:58.991833 systemd[1]: sshd@2-172.31.27.155:22-139.178.89.65:36744.service: Deactivated successfully. Jul 2 00:43:58.993017 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:43:58.994192 systemd-logind[1817]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:43:58.997374 systemd-logind[1817]: Removed session 3. Jul 2 00:43:59.018671 systemd[1]: Started sshd@3-172.31.27.155:22-139.178.89.65:36754.service. Jul 2 00:43:59.193753 sshd[2068]: Accepted publickey for core from 139.178.89.65 port 36754 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:43:59.196778 sshd[2068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:59.204725 systemd-logind[1817]: New session 4 of user core. Jul 2 00:43:59.205584 systemd[1]: Started session-4.scope. Jul 2 00:43:59.340737 sshd[2068]: pam_unix(sshd:session): session closed for user core Jul 2 00:43:59.345913 systemd-logind[1817]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:43:59.346574 systemd[1]: sshd@3-172.31.27.155:22-139.178.89.65:36754.service: Deactivated successfully. 
Jul 2 00:43:59.347892 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:43:59.349360 systemd-logind[1817]: Removed session 4. Jul 2 00:43:59.368265 systemd[1]: Started sshd@4-172.31.27.155:22-139.178.89.65:36756.service. Jul 2 00:43:59.537968 sshd[2074]: Accepted publickey for core from 139.178.89.65 port 36756 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:43:59.540469 sshd[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:43:59.549104 systemd[1]: Started session-5.scope. Jul 2 00:43:59.549986 systemd-logind[1817]: New session 5 of user core. Jul 2 00:43:59.720368 sudo[2077]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:43:59.720905 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:43:59.768899 systemd[1]: Starting docker.service... Jul 2 00:43:59.849288 env[2087]: time="2024-07-02T00:43:59.844071951Z" level=info msg="Starting up" Jul 2 00:43:59.851279 env[2087]: time="2024-07-02T00:43:59.851223008Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:43:59.851461 env[2087]: time="2024-07-02T00:43:59.851418657Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:43:59.851933 env[2087]: time="2024-07-02T00:43:59.851894684Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:43:59.852064 env[2087]: time="2024-07-02T00:43:59.852036357Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:43:59.855019 env[2087]: time="2024-07-02T00:43:59.854968545Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:43:59.855268 env[2087]: time="2024-07-02T00:43:59.855193700Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:43:59.855399 env[2087]: time="2024-07-02T00:43:59.855368868Z" 
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:43:59.855506 env[2087]: time="2024-07-02T00:43:59.855479480Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:44:00.299881 env[2087]: time="2024-07-02T00:44:00.299644545Z" level=info msg="Loading containers: start." Jul 2 00:44:00.585261 kernel: Initializing XFRM netlink socket Jul 2 00:44:00.675465 env[2087]: time="2024-07-02T00:44:00.675394587Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 00:44:00.679051 (udev-worker)[2097]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:44:00.806856 systemd-networkd[1540]: docker0: Link UP Jul 2 00:44:00.830582 env[2087]: time="2024-07-02T00:44:00.830533467Z" level=info msg="Loading containers: done." Jul 2 00:44:00.854640 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3164758167-merged.mount: Deactivated successfully. Jul 2 00:44:00.866608 env[2087]: time="2024-07-02T00:44:00.866530785Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:44:00.867612 env[2087]: time="2024-07-02T00:44:00.867567036Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 00:44:00.867829 env[2087]: time="2024-07-02T00:44:00.867794012Z" level=info msg="Daemon has completed initialization" Jul 2 00:44:00.895469 systemd[1]: Started docker.service. Jul 2 00:44:00.913236 env[2087]: time="2024-07-02T00:44:00.913098920Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:44:02.029689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:44:02.030044 systemd[1]: Stopped kubelet.service. 
Jul 2 00:44:02.030121 systemd[1]: kubelet.service: Consumed 1.497s CPU time. Jul 2 00:44:02.033116 systemd[1]: Starting kubelet.service... Jul 2 00:44:02.097960 env[1826]: time="2024-07-02T00:44:02.097899406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 00:44:02.533462 systemd[1]: Started kubelet.service. Jul 2 00:44:02.642181 kubelet[2218]: E0702 00:44:02.642065 2218 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:44:02.650839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:44:02.651150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:44:02.821994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672895837.mount: Deactivated successfully. 
Jul 2 00:44:05.179002 env[1826]: time="2024-07-02T00:44:05.178939700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:05.183697 env[1826]: time="2024-07-02T00:44:05.183602062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:05.187471 env[1826]: time="2024-07-02T00:44:05.187400372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:05.191301 env[1826]: time="2024-07-02T00:44:05.191201895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:05.193384 env[1826]: time="2024-07-02T00:44:05.193321449Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 00:44:05.210836 env[1826]: time="2024-07-02T00:44:05.210761008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 00:44:08.106165 env[1826]: time="2024-07-02T00:44:08.106102020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:08.110974 env[1826]: time="2024-07-02T00:44:08.110912569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Jul 2 00:44:08.114625 env[1826]: time="2024-07-02T00:44:08.114536195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:08.118502 env[1826]: time="2024-07-02T00:44:08.118442423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:08.120433 env[1826]: time="2024-07-02T00:44:08.120369070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 00:44:08.139447 env[1826]: time="2024-07-02T00:44:08.139359671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 00:44:10.401541 env[1826]: time="2024-07-02T00:44:10.401483337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:10.405195 env[1826]: time="2024-07-02T00:44:10.405135608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:10.409372 env[1826]: time="2024-07-02T00:44:10.409289385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:10.413588 env[1826]: time="2024-07-02T00:44:10.413526575Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:10.416753 env[1826]: time="2024-07-02T00:44:10.416680116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 00:44:10.439501 env[1826]: time="2024-07-02T00:44:10.439450351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 00:44:11.880412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299927316.mount: Deactivated successfully. Jul 2 00:44:12.767176 env[1826]: time="2024-07-02T00:44:12.767117092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:12.770374 env[1826]: time="2024-07-02T00:44:12.770291837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:12.774844 env[1826]: time="2024-07-02T00:44:12.774779922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:12.779577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:44:12.779901 systemd[1]: Stopped kubelet.service. 
Jul 2 00:44:12.780705 env[1826]: time="2024-07-02T00:44:12.780650005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:12.781384 env[1826]: time="2024-07-02T00:44:12.781328684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 00:44:12.782481 systemd[1]: Starting kubelet.service... Jul 2 00:44:12.807362 env[1826]: time="2024-07-02T00:44:12.807260811Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:44:13.239592 systemd[1]: Started kubelet.service. Jul 2 00:44:13.342851 kubelet[2251]: E0702 00:44:13.342776 2251 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:44:13.349555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:44:13.349916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:44:13.567876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208596679.mount: Deactivated successfully. 
Jul 2 00:44:15.792806 env[1826]: time="2024-07-02T00:44:15.792737269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:15.800023 env[1826]: time="2024-07-02T00:44:15.799882060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:15.804414 env[1826]: time="2024-07-02T00:44:15.804343651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:15.810355 env[1826]: time="2024-07-02T00:44:15.810269143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 00:44:15.811009 env[1826]: time="2024-07-02T00:44:15.808353273Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:15.827373 env[1826]: time="2024-07-02T00:44:15.827293246Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:44:16.323175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount71411130.mount: Deactivated successfully. 
Jul 2 00:44:16.330334 env[1826]: time="2024-07-02T00:44:16.330266541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:16.333544 env[1826]: time="2024-07-02T00:44:16.333485736Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:16.336364 env[1826]: time="2024-07-02T00:44:16.336304340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:16.338875 env[1826]: time="2024-07-02T00:44:16.338804931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:16.340018 env[1826]: time="2024-07-02T00:44:16.339956538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 00:44:16.356556 env[1826]: time="2024-07-02T00:44:16.356499540Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:44:16.951617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825990849.mount: Deactivated successfully.
Jul 2 00:44:19.838892 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 00:44:19.961999 env[1826]: time="2024-07-02T00:44:19.961914224Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:19.965653 env[1826]: time="2024-07-02T00:44:19.965582655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:19.969442 env[1826]: time="2024-07-02T00:44:19.969385184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:19.973118 env[1826]: time="2024-07-02T00:44:19.973050557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:19.975233 env[1826]: time="2024-07-02T00:44:19.975126983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 00:44:23.243919 amazon-ssm-agent[1801]: 2024-07-02 00:44:23 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Jul 2 00:44:23.529672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:44:23.530018 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:23.535834 systemd[1]: Starting kubelet.service...
Jul 2 00:44:23.963014 systemd[1]: Started kubelet.service.
Jul 2 00:44:24.073953 kubelet[2333]: E0702 00:44:24.073844 2333 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:44:24.081271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:44:24.081607 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:44:27.048661 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:27.056858 systemd[1]: Starting kubelet.service...
Jul 2 00:44:27.101999 systemd[1]: Reloading.
Jul 2 00:44:27.281512 /usr/lib/systemd/system-generators/torcx-generator[2364]: time="2024-07-02T00:44:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 00:44:27.284973 /usr/lib/systemd/system-generators/torcx-generator[2364]: time="2024-07-02T00:44:27Z" level=info msg="torcx already run"
Jul 2 00:44:27.488175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 00:44:27.488624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 00:44:27.531199 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:44:27.752048 systemd[1]: Started kubelet.service.
Jul 2 00:44:27.755428 systemd[1]: Stopping kubelet.service...
Jul 2 00:44:27.756561 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:44:27.757170 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:27.761515 systemd[1]: Starting kubelet.service...
Jul 2 00:44:28.084420 systemd[1]: Started kubelet.service.
Jul 2 00:44:28.164984 kubelet[2427]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:44:28.165613 kubelet[2427]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:44:28.165719 kubelet[2427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:44:28.165998 kubelet[2427]: I0702 00:44:28.165932 2427 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:44:29.567583 kubelet[2427]: I0702 00:44:29.567542 2427 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 00:44:29.568268 kubelet[2427]: I0702 00:44:29.568241 2427 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:44:29.568721 kubelet[2427]: I0702 00:44:29.568695 2427 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 00:44:29.621070 kubelet[2427]: I0702 00:44:29.621016 2427 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:44:29.621809 kubelet[2427]: E0702 00:44:29.621779 2427 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.643957 kubelet[2427]: I0702 00:44:29.643882 2427 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:44:29.644512 kubelet[2427]: I0702 00:44:29.644482 2427 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:44:29.644827 kubelet[2427]: I0702 00:44:29.644792 2427 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:44:29.644993 kubelet[2427]: I0702 00:44:29.644840 2427 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:44:29.644993 kubelet[2427]: I0702 00:44:29.644863 2427 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:44:29.647556 kubelet[2427]: I0702 00:44:29.647510 2427 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:44:29.652578 kubelet[2427]: I0702 00:44:29.652530 2427 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 00:44:29.653171 kubelet[2427]: I0702 00:44:29.653130 2427 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:44:29.653531 kubelet[2427]: W0702 00:44:29.653467 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-155&limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.653673 kubelet[2427]: E0702 00:44:29.653649 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-155&limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.654319 kubelet[2427]: I0702 00:44:29.654275 2427 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:44:29.655304 kubelet[2427]: I0702 00:44:29.655275 2427 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:44:29.656573 kubelet[2427]: W0702 00:44:29.656491 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.656815 kubelet[2427]: E0702 00:44:29.656790 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.657081 kubelet[2427]: I0702 00:44:29.657055 2427 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 00:44:29.657737 kubelet[2427]: I0702 00:44:29.657708 2427 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:44:29.659072 kubelet[2427]: W0702 00:44:29.659038 2427 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:44:29.660363 kubelet[2427]: I0702 00:44:29.660330 2427 server.go:1256] "Started kubelet"
Jul 2 00:44:29.677531 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 00:44:29.685737 kubelet[2427]: I0702 00:44:29.685694 2427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:44:29.687154 kubelet[2427]: E0702 00:44:29.687097 2427 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.155:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.155:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-155.17de3eb875054b71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-155,UID:ip-172-31-27-155,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-155,},FirstTimestamp:2024-07-02 00:44:29.660294001 +0000 UTC m=+1.563741124,LastTimestamp:2024-07-02 00:44:29.660294001 +0000 UTC m=+1.563741124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-155,}"
Jul 2 00:44:29.688788 kubelet[2427]: I0702 00:44:29.688751 2427 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:44:29.690394 kubelet[2427]: I0702 00:44:29.690357 2427 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:44:29.697612 kubelet[2427]: I0702 00:44:29.697561 2427 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:44:29.697768 kubelet[2427]: I0702 00:44:29.697698 2427 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:44:29.698078 kubelet[2427]: I0702 00:44:29.698052 2427 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:44:29.698295 kubelet[2427]: I0702 00:44:29.691292 2427 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 00:44:29.700078 kubelet[2427]: I0702 00:44:29.695792 2427 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:44:29.700742 kubelet[2427]: I0702 00:44:29.695827 2427 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 00:44:29.700916 kubelet[2427]: W0702 00:44:29.697110 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.701157 kubelet[2427]: E0702 00:44:29.701134 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.701343 kubelet[2427]: E0702 00:44:29.697295 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-155?timeout=10s\": dial tcp 172.31.27.155:6443: connect: connection refused" interval="200ms"
Jul 2 00:44:29.701567 kubelet[2427]: E0702 00:44:29.697466 2427 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:44:29.701662 kubelet[2427]: I0702 00:44:29.701517 2427 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 00:44:29.702387 kubelet[2427]: I0702 00:44:29.702344 2427 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:44:29.725438 kubelet[2427]: I0702 00:44:29.725380 2427 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:44:29.725438 kubelet[2427]: I0702 00:44:29.725430 2427 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:44:29.725665 kubelet[2427]: I0702 00:44:29.725465 2427 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:44:29.736758 kubelet[2427]: I0702 00:44:29.736703 2427 policy_none.go:49] "None policy: Start"
Jul 2 00:44:29.738087 kubelet[2427]: I0702 00:44:29.738039 2427 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:44:29.738292 kubelet[2427]: I0702 00:44:29.738114 2427 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:44:29.762941 systemd[1]: Created slice kubepods.slice.
Jul 2 00:44:29.769465 kubelet[2427]: I0702 00:44:29.769428 2427 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:44:29.773609 systemd[1]: Created slice kubepods-burstable.slice.
Jul 2 00:44:29.776382 kubelet[2427]: I0702 00:44:29.776349 2427 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:44:29.777554 kubelet[2427]: I0702 00:44:29.777515 2427 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:44:29.778912 kubelet[2427]: I0702 00:44:29.777860 2427 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 00:44:29.780666 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 00:44:29.783161 kubelet[2427]: E0702 00:44:29.783116 2427 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:44:29.783842 kubelet[2427]: W0702 00:44:29.783772 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.783975 kubelet[2427]: E0702 00:44:29.783862 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:29.790374 kubelet[2427]: I0702 00:44:29.790320 2427 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 00:44:29.790771 kubelet[2427]: I0702 00:44:29.790733 2427 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 00:44:29.794321 kubelet[2427]: E0702 00:44:29.794286 2427 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-155\" not found"
Jul 2 00:44:29.798906 kubelet[2427]: I0702 00:44:29.798842 2427 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-155"
Jul 2 00:44:29.799526 kubelet[2427]: E0702 00:44:29.799499 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.155:6443/api/v1/nodes\": dial tcp 172.31.27.155:6443: connect: connection refused" node="ip-172-31-27-155"
Jul 2 00:44:29.885281 kubelet[2427]: I0702 00:44:29.884307 2427 topology_manager.go:215] "Topology Admit Handler" podUID="a2c912ca61fc1789225157791c98ea13" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-155"
Jul 2 00:44:29.889285 kubelet[2427]: I0702 00:44:29.888043 2427 topology_manager.go:215] "Topology Admit Handler" podUID="2d7555ba1848484cfc46bd476cfdc6df" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-155"
Jul 2 00:44:29.891431 kubelet[2427]: I0702 00:44:29.891397 2427 topology_manager.go:215] "Topology Admit Handler" podUID="417e82181e446d53d6100ce808504345" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-155"
Jul 2 00:44:29.903750 kubelet[2427]: E0702 00:44:29.903324 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-155?timeout=10s\": dial tcp 172.31.27.155:6443: connect: connection refused" interval="400ms"
Jul 2 00:44:29.906322 systemd[1]: Created slice kubepods-burstable-pod2d7555ba1848484cfc46bd476cfdc6df.slice.
Jul 2 00:44:29.921407 systemd[1]: Created slice kubepods-burstable-poda2c912ca61fc1789225157791c98ea13.slice.
Jul 2 00:44:29.931416 systemd[1]: Created slice kubepods-burstable-pod417e82181e446d53d6100ce808504345.slice.
Jul 2 00:44:30.002535 kubelet[2427]: I0702 00:44:30.002469 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2c912ca61fc1789225157791c98ea13-ca-certs\") pod \"kube-apiserver-ip-172-31-27-155\" (UID: \"a2c912ca61fc1789225157791c98ea13\") " pod="kube-system/kube-apiserver-ip-172-31-27-155"
Jul 2 00:44:30.002698 kubelet[2427]: I0702 00:44:30.002546 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155"
Jul 2 00:44:30.002698 kubelet[2427]: I0702 00:44:30.002595 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155"
Jul 2 00:44:30.002698 kubelet[2427]: I0702 00:44:30.002640 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155"
Jul 2 00:44:30.002698 kubelet[2427]: I0702 00:44:30.002685 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/417e82181e446d53d6100ce808504345-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-155\" (UID: \"417e82181e446d53d6100ce808504345\") " pod="kube-system/kube-scheduler-ip-172-31-27-155"
Jul 2 00:44:30.002962 kubelet[2427]: I0702 00:44:30.002733 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2c912ca61fc1789225157791c98ea13-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-155\" (UID: \"a2c912ca61fc1789225157791c98ea13\") " pod="kube-system/kube-apiserver-ip-172-31-27-155"
Jul 2 00:44:30.002962 kubelet[2427]: I0702 00:44:30.002792 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2c912ca61fc1789225157791c98ea13-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-155\" (UID: \"a2c912ca61fc1789225157791c98ea13\") " pod="kube-system/kube-apiserver-ip-172-31-27-155"
Jul 2 00:44:30.002962 kubelet[2427]: I0702 00:44:30.002842 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155"
Jul 2 00:44:30.002962 kubelet[2427]: I0702 00:44:30.002913 2427 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155"
Jul 2 00:44:30.003287 kubelet[2427]: I0702 00:44:30.003241 2427 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-155"
Jul 2 00:44:30.003739 kubelet[2427]: E0702 00:44:30.003680 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.155:6443/api/v1/nodes\": dial tcp 172.31.27.155:6443: connect: connection refused" node="ip-172-31-27-155"
Jul 2 00:44:30.220229 env[1826]: time="2024-07-02T00:44:30.219644725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-155,Uid:2d7555ba1848484cfc46bd476cfdc6df,Namespace:kube-system,Attempt:0,}"
Jul 2 00:44:30.229445 env[1826]: time="2024-07-02T00:44:30.229376518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-155,Uid:a2c912ca61fc1789225157791c98ea13,Namespace:kube-system,Attempt:0,}"
Jul 2 00:44:30.235945 env[1826]: time="2024-07-02T00:44:30.235890318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-155,Uid:417e82181e446d53d6100ce808504345,Namespace:kube-system,Attempt:0,}"
Jul 2 00:44:30.304658 kubelet[2427]: E0702 00:44:30.304601 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-155?timeout=10s\": dial tcp 172.31.27.155:6443: connect: connection refused" interval="800ms"
Jul 2 00:44:30.406379 kubelet[2427]: I0702 00:44:30.406006 2427 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-155"
Jul 2 00:44:30.406556 kubelet[2427]: E0702 00:44:30.406495 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.155:6443/api/v1/nodes\": dial tcp 172.31.27.155:6443: connect: connection refused" node="ip-172-31-27-155"
Jul 2 00:44:30.537067 kubelet[2427]: W0702 00:44:30.536965 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:30.537067 kubelet[2427]: E0702 00:44:30.537066 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:30.623728 kubelet[2427]: W0702 00:44:30.623602 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-155&limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:30.623728 kubelet[2427]: E0702 00:44:30.623692 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-155&limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:30.749376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232099136.mount: Deactivated successfully.
Jul 2 00:44:30.764451 env[1826]: time="2024-07-02T00:44:30.764381514Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.765258 kubelet[2427]: W0702 00:44:30.765069 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:30.765258 kubelet[2427]: E0702 00:44:30.765136 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:30.766778 env[1826]: time="2024-07-02T00:44:30.766690664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.771857 env[1826]: time="2024-07-02T00:44:30.771773479Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.774187 env[1826]: time="2024-07-02T00:44:30.774133748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.775932 env[1826]: time="2024-07-02T00:44:30.775878423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.778958 env[1826]: time="2024-07-02T00:44:30.778880992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.780479 env[1826]: time="2024-07-02T00:44:30.780410771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.781990 env[1826]: time="2024-07-02T00:44:30.781947896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.787803 env[1826]: time="2024-07-02T00:44:30.787666277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.792454 env[1826]: time="2024-07-02T00:44:30.790596938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.794199 env[1826]: time="2024-07-02T00:44:30.794151667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.800985 env[1826]: time="2024-07-02T00:44:30.800917751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:30.871576 env[1826]: time="2024-07-02T00:44:30.871470993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:44:30.871841 env[1826]: time="2024-07-02T00:44:30.871785871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:44:30.872007 env[1826]: time="2024-07-02T00:44:30.871960390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:44:30.872540 env[1826]: time="2024-07-02T00:44:30.872475906Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/776a083d19b363e48b6e98d5517d171743b4cfe5f26cbe7c2b467371dd1060df pid=2475 runtime=io.containerd.runc.v2
Jul 2 00:44:30.875120 env[1826]: time="2024-07-02T00:44:30.874976510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:44:30.875369 env[1826]: time="2024-07-02T00:44:30.875057240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:44:30.875369 env[1826]: time="2024-07-02T00:44:30.875128260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:44:30.875676 env[1826]: time="2024-07-02T00:44:30.875563202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0698edddd90b145effd3d9bf2e753319ba1e3fb93992872832e31067c1bb59 pid=2474 runtime=io.containerd.runc.v2
Jul 2 00:44:30.891025 env[1826]: time="2024-07-02T00:44:30.890884708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:44:30.891171 env[1826]: time="2024-07-02T00:44:30.891040575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:44:30.891171 env[1826]: time="2024-07-02T00:44:30.891128062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:44:30.891585 env[1826]: time="2024-07-02T00:44:30.891508212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab88bb4b66e369f98717d437fcd57675b4a27c473582ee4bf88f13d431889295 pid=2495 runtime=io.containerd.runc.v2
Jul 2 00:44:30.905891 systemd[1]: Started cri-containerd-776a083d19b363e48b6e98d5517d171743b4cfe5f26cbe7c2b467371dd1060df.scope.
Jul 2 00:44:30.932747 systemd[1]: Started cri-containerd-fd0698edddd90b145effd3d9bf2e753319ba1e3fb93992872832e31067c1bb59.scope.
Jul 2 00:44:30.964376 systemd[1]: Started cri-containerd-ab88bb4b66e369f98717d437fcd57675b4a27c473582ee4bf88f13d431889295.scope.
Jul 2 00:44:31.051419 env[1826]: time="2024-07-02T00:44:31.051275736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-155,Uid:2d7555ba1848484cfc46bd476cfdc6df,Namespace:kube-system,Attempt:0,} returns sandbox id \"776a083d19b363e48b6e98d5517d171743b4cfe5f26cbe7c2b467371dd1060df\""
Jul 2 00:44:31.060153 env[1826]: time="2024-07-02T00:44:31.060092896Z" level=info msg="CreateContainer within sandbox \"776a083d19b363e48b6e98d5517d171743b4cfe5f26cbe7c2b467371dd1060df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 00:44:31.105838 kubelet[2427]: E0702 00:44:31.105749 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-155?timeout=10s\": dial tcp 172.31.27.155:6443: connect: connection refused" interval="1.6s"
Jul 2 00:44:31.112309 env[1826]: time="2024-07-02T00:44:31.112088915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-155,Uid:417e82181e446d53d6100ce808504345,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd0698edddd90b145effd3d9bf2e753319ba1e3fb93992872832e31067c1bb59\""
Jul 2 00:44:31.112950 env[1826]: time="2024-07-02T00:44:31.112760902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-155,Uid:a2c912ca61fc1789225157791c98ea13,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab88bb4b66e369f98717d437fcd57675b4a27c473582ee4bf88f13d431889295\""
Jul 2 00:44:31.118353 env[1826]: time="2024-07-02T00:44:31.118295423Z" level=info msg="CreateContainer within sandbox \"fd0698edddd90b145effd3d9bf2e753319ba1e3fb93992872832e31067c1bb59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 00:44:31.118864 env[1826]: time="2024-07-02T00:44:31.118802879Z" level=info msg="CreateContainer within sandbox \"ab88bb4b66e369f98717d437fcd57675b4a27c473582ee4bf88f13d431889295\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 00:44:31.209507 kubelet[2427]: I0702 00:44:31.208989 2427 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-155"
Jul 2 00:44:31.209507 kubelet[2427]: E0702 00:44:31.209461 2427 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.155:6443/api/v1/nodes\": dial tcp 172.31.27.155:6443: connect: connection refused" node="ip-172-31-27-155"
Jul 2 00:44:31.245359 kubelet[2427]: W0702 00:44:31.245185 2427 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused
Jul 2 00:44:31.245359 kubelet[2427]: E0702 00:44:31.245293 2427 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get
"https://172.31.27.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.155:6443: connect: connection refused Jul 2 00:44:31.443244 env[1826]: time="2024-07-02T00:44:31.442153727Z" level=info msg="CreateContainer within sandbox \"776a083d19b363e48b6e98d5517d171743b4cfe5f26cbe7c2b467371dd1060df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d\"" Jul 2 00:44:31.444848 env[1826]: time="2024-07-02T00:44:31.444799906Z" level=info msg="StartContainer for \"4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d\"" Jul 2 00:44:31.471413 env[1826]: time="2024-07-02T00:44:31.471321526Z" level=info msg="CreateContainer within sandbox \"fd0698edddd90b145effd3d9bf2e753319ba1e3fb93992872832e31067c1bb59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005\"" Jul 2 00:44:31.472172 env[1826]: time="2024-07-02T00:44:31.472126370Z" level=info msg="StartContainer for \"d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005\"" Jul 2 00:44:31.478653 systemd[1]: Started cri-containerd-4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d.scope. Jul 2 00:44:31.517167 env[1826]: time="2024-07-02T00:44:31.517074742Z" level=info msg="CreateContainer within sandbox \"ab88bb4b66e369f98717d437fcd57675b4a27c473582ee4bf88f13d431889295\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"97d084ddc5f88d44885bb9605a0c817e52a9a81b2ea659ce0cc0abb355fed510\"" Jul 2 00:44:31.518508 env[1826]: time="2024-07-02T00:44:31.518445181Z" level=info msg="StartContainer for \"97d084ddc5f88d44885bb9605a0c817e52a9a81b2ea659ce0cc0abb355fed510\"" Jul 2 00:44:31.546151 systemd[1]: Started cri-containerd-d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005.scope. 
Jul 2 00:44:31.606773 systemd[1]: Started cri-containerd-97d084ddc5f88d44885bb9605a0c817e52a9a81b2ea659ce0cc0abb355fed510.scope. Jul 2 00:44:31.629573 env[1826]: time="2024-07-02T00:44:31.629493496Z" level=info msg="StartContainer for \"4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d\" returns successfully" Jul 2 00:44:31.667544 env[1826]: time="2024-07-02T00:44:31.667456838Z" level=info msg="StartContainer for \"d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005\" returns successfully" Jul 2 00:44:31.754614 kubelet[2427]: E0702 00:44:31.754472 2427 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.155:6443: connect: connection refused Jul 2 00:44:31.786081 env[1826]: time="2024-07-02T00:44:31.786012941Z" level=info msg="StartContainer for \"97d084ddc5f88d44885bb9605a0c817e52a9a81b2ea659ce0cc0abb355fed510\" returns successfully" Jul 2 00:44:32.811996 kubelet[2427]: I0702 00:44:32.811958 2427 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-155" Jul 2 00:44:34.449814 update_engine[1819]: I0702 00:44:34.449266 1819 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:44:36.473010 kubelet[2427]: E0702 00:44:36.472966 2427 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-155\" not found" node="ip-172-31-27-155" Jul 2 00:44:36.479554 kubelet[2427]: I0702 00:44:36.479490 2427 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-155" Jul 2 00:44:36.556650 kubelet[2427]: E0702 00:44:36.556585 2427 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-155.17de3eb875054b71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-155,UID:ip-172-31-27-155,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-155,},FirstTimestamp:2024-07-02 00:44:29.660294001 +0000 UTC m=+1.563741124,LastTimestamp:2024-07-02 00:44:29.660294001 +0000 UTC m=+1.563741124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-155,}" Jul 2 00:44:36.633644 kubelet[2427]: E0702 00:44:36.633606 2427 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-155.17de3eb8773bb19a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-155,UID:ip-172-31-27-155,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-27-155,},FirstTimestamp:2024-07-02 00:44:29.69741353 +0000 UTC m=+1.600860629,LastTimestamp:2024-07-02 00:44:29.69741353 +0000 UTC m=+1.600860629,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-155,}" Jul 2 00:44:36.664574 kubelet[2427]: I0702 00:44:36.664480 2427 apiserver.go:52] "Watching apiserver" Jul 2 00:44:36.701281 kubelet[2427]: I0702 00:44:36.701197 2427 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:44:39.293132 systemd[1]: Reloading. Jul 2 00:44:39.468894 /usr/lib/systemd/system-generators/torcx-generator[2823]: time="2024-07-02T00:44:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:44:39.468960 /usr/lib/systemd/system-generators/torcx-generator[2823]: time="2024-07-02T00:44:39Z" level=info msg="torcx already run" Jul 2 00:44:39.647525 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:44:39.647832 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:44:39.689275 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:44:39.997139 kubelet[2427]: I0702 00:44:39.997025 2427 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:44:39.998414 systemd[1]: Stopping kubelet.service... Jul 2 00:44:40.017310 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:44:40.017907 systemd[1]: Stopped kubelet.service. Jul 2 00:44:40.018117 systemd[1]: kubelet.service: Consumed 2.381s CPU time. 
Jul 2 00:44:40.023025 systemd[1]: Starting kubelet.service... Jul 2 00:44:40.409054 systemd[1]: Started kubelet.service. Jul 2 00:44:40.519868 kubelet[2880]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:44:40.519868 kubelet[2880]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:44:40.519868 kubelet[2880]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:44:40.520500 kubelet[2880]: I0702 00:44:40.520153 2880 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:44:40.544477 kubelet[2880]: I0702 00:44:40.544408 2880 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:44:40.544477 kubelet[2880]: I0702 00:44:40.544466 2880 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:44:40.544890 kubelet[2880]: I0702 00:44:40.544846 2880 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:44:40.551043 kubelet[2880]: I0702 00:44:40.550959 2880 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:44:40.566121 kubelet[2880]: I0702 00:44:40.566046 2880 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:44:40.587996 kubelet[2880]: I0702 00:44:40.587932 2880 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:44:40.590728 sudo[2893]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:44:40.591922 kubelet[2880]: I0702 00:44:40.591881 2880 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:44:40.592205 sudo[2893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:44:40.592842 kubelet[2880]: I0702 00:44:40.592790 2880 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:44:40.594398 kubelet[2880]: I0702 00:44:40.594356 2880 
topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:44:40.599726 kubelet[2880]: I0702 00:44:40.596675 2880 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:44:40.600042 kubelet[2880]: I0702 00:44:40.600007 2880 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:44:40.600487 kubelet[2880]: I0702 00:44:40.600450 2880 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:44:40.600727 kubelet[2880]: I0702 00:44:40.600698 2880 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:44:40.600879 kubelet[2880]: I0702 00:44:40.600856 2880 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:44:40.601009 kubelet[2880]: I0702 00:44:40.600987 2880 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:44:40.606041 kubelet[2880]: I0702 00:44:40.606001 2880 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:44:40.606721 kubelet[2880]: I0702 00:44:40.606685 2880 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:44:40.611167 kubelet[2880]: I0702 00:44:40.611085 2880 server.go:1256] "Started kubelet" Jul 2 00:44:40.628850 kubelet[2880]: I0702 00:44:40.628807 2880 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:44:40.649187 kubelet[2880]: I0702 00:44:40.649136 2880 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:44:40.651126 kubelet[2880]: I0702 00:44:40.651082 2880 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:44:40.668682 kubelet[2880]: I0702 00:44:40.668531 2880 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:44:40.669619 kubelet[2880]: I0702 00:44:40.669578 2880 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 
00:44:40.685254 kubelet[2880]: I0702 00:44:40.680098 2880 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:44:40.694527 kubelet[2880]: I0702 00:44:40.694480 2880 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:44:40.695532 kubelet[2880]: I0702 00:44:40.695492 2880 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:44:40.726346 kubelet[2880]: E0702 00:44:40.726307 2880 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:44:40.754093 kubelet[2880]: I0702 00:44:40.754054 2880 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:44:40.754324 kubelet[2880]: I0702 00:44:40.754303 2880 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:44:40.754592 kubelet[2880]: I0702 00:44:40.754558 2880 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:44:40.764409 kubelet[2880]: I0702 00:44:40.764367 2880 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:44:40.773547 kubelet[2880]: I0702 00:44:40.773505 2880 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:44:40.773864 kubelet[2880]: I0702 00:44:40.773833 2880 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:44:40.774010 kubelet[2880]: I0702 00:44:40.773989 2880 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:44:40.774277 kubelet[2880]: E0702 00:44:40.774185 2880 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:44:40.858054 kubelet[2880]: I0702 00:44:40.858009 2880 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-155" Jul 2 00:44:40.881466 kubelet[2880]: E0702 00:44:40.881282 2880 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:44:40.910736 kubelet[2880]: I0702 00:44:40.910682 2880 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-155" Jul 2 00:44:40.910898 kubelet[2880]: I0702 00:44:40.910819 2880 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-155" Jul 2 00:44:41.045132 kubelet[2880]: I0702 00:44:41.045093 2880 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:44:41.045376 kubelet[2880]: I0702 00:44:41.045354 2880 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:44:41.045512 kubelet[2880]: I0702 00:44:41.045492 2880 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:44:41.045890 kubelet[2880]: I0702 00:44:41.045868 2880 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:44:41.046044 kubelet[2880]: I0702 00:44:41.046022 2880 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:44:41.047244 kubelet[2880]: I0702 00:44:41.047111 2880 policy_none.go:49] "None policy: Start" Jul 2 00:44:41.051420 kubelet[2880]: I0702 00:44:41.051379 2880 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:44:41.051705 kubelet[2880]: I0702 
00:44:41.051684 2880 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:44:41.052247 kubelet[2880]: I0702 00:44:41.052189 2880 state_mem.go:75] "Updated machine memory state" Jul 2 00:44:41.080031 kubelet[2880]: I0702 00:44:41.079992 2880 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:44:41.088591 kubelet[2880]: I0702 00:44:41.088553 2880 topology_manager.go:215] "Topology Admit Handler" podUID="a2c912ca61fc1789225157791c98ea13" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-155" Jul 2 00:44:41.097412 kubelet[2880]: I0702 00:44:41.097358 2880 topology_manager.go:215] "Topology Admit Handler" podUID="2d7555ba1848484cfc46bd476cfdc6df" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-155" Jul 2 00:44:41.097851 kubelet[2880]: I0702 00:44:41.097820 2880 topology_manager.go:215] "Topology Admit Handler" podUID="417e82181e446d53d6100ce808504345" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-155" Jul 2 00:44:41.099043 kubelet[2880]: I0702 00:44:41.090191 2880 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:44:41.110952 kubelet[2880]: I0702 00:44:41.110893 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2c912ca61fc1789225157791c98ea13-ca-certs\") pod \"kube-apiserver-ip-172-31-27-155\" (UID: \"a2c912ca61fc1789225157791c98ea13\") " pod="kube-system/kube-apiserver-ip-172-31-27-155" Jul 2 00:44:41.111362 kubelet[2880]: I0702 00:44:41.111310 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2c912ca61fc1789225157791c98ea13-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-155\" (UID: \"a2c912ca61fc1789225157791c98ea13\") " pod="kube-system/kube-apiserver-ip-172-31-27-155" Jul 2 00:44:41.111647 
kubelet[2880]: I0702 00:44:41.111618 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2c912ca61fc1789225157791c98ea13-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-155\" (UID: \"a2c912ca61fc1789225157791c98ea13\") " pod="kube-system/kube-apiserver-ip-172-31-27-155" Jul 2 00:44:41.138351 kubelet[2880]: E0702 00:44:41.138290 2880 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-27-155\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-27-155" Jul 2 00:44:41.212190 kubelet[2880]: I0702 00:44:41.212148 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155" Jul 2 00:44:41.212469 kubelet[2880]: I0702 00:44:41.212446 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155" Jul 2 00:44:41.212620 kubelet[2880]: I0702 00:44:41.212598 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155" Jul 2 00:44:41.212782 kubelet[2880]: I0702 00:44:41.212761 2880 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/417e82181e446d53d6100ce808504345-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-155\" (UID: \"417e82181e446d53d6100ce808504345\") " pod="kube-system/kube-scheduler-ip-172-31-27-155" Jul 2 00:44:41.212995 kubelet[2880]: I0702 00:44:41.212956 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155" Jul 2 00:44:41.213153 kubelet[2880]: I0702 00:44:41.213132 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2d7555ba1848484cfc46bd476cfdc6df-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-155\" (UID: \"2d7555ba1848484cfc46bd476cfdc6df\") " pod="kube-system/kube-controller-manager-ip-172-31-27-155" Jul 2 00:44:41.601734 kubelet[2880]: I0702 00:44:41.601684 2880 apiserver.go:52] "Watching apiserver" Jul 2 00:44:41.695300 kubelet[2880]: I0702 00:44:41.695250 2880 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:44:41.732273 sudo[2893]: pam_unix(sudo:session): session closed for user root Jul 2 00:44:41.963813 kubelet[2880]: I0702 00:44:41.963658 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-155" podStartSLOduration=0.963595528 podStartE2EDuration="963.595528ms" podCreationTimestamp="2024-07-02 00:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:41.950984876 +0000 UTC m=+1.531024125" 
watchObservedRunningTime="2024-07-02 00:44:41.963595528 +0000 UTC m=+1.543634753" Jul 2 00:44:41.983190 kubelet[2880]: I0702 00:44:41.983133 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-155" podStartSLOduration=0.983081257 podStartE2EDuration="983.081257ms" podCreationTimestamp="2024-07-02 00:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:41.965729516 +0000 UTC m=+1.545768753" watchObservedRunningTime="2024-07-02 00:44:41.983081257 +0000 UTC m=+1.563120482" Jul 2 00:44:45.984901 sudo[2077]: pam_unix(sudo:session): session closed for user root Jul 2 00:44:46.010424 sshd[2074]: pam_unix(sshd:session): session closed for user core Jul 2 00:44:46.015528 systemd[1]: sshd@4-172.31.27.155:22-139.178.89.65:36756.service: Deactivated successfully. Jul 2 00:44:46.016756 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:44:46.017055 systemd[1]: session-5.scope: Consumed 12.205s CPU time. Jul 2 00:44:46.018834 systemd-logind[1817]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:44:46.021533 systemd-logind[1817]: Removed session 5. Jul 2 00:44:54.594954 kubelet[2880]: I0702 00:44:54.594906 2880 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:44:54.595863 env[1826]: time="2024-07-02T00:44:54.595756069Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:44:54.596702 kubelet[2880]: I0702 00:44:54.596670 2880 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:44:54.618107 kubelet[2880]: I0702 00:44:54.618063 2880 topology_manager.go:215] "Topology Admit Handler" podUID="32c26b77-1323-4b03-b0db-fb7432bc3e41" podNamespace="kube-system" podName="kube-proxy-cx2cz" Jul 2 00:44:54.629268 systemd[1]: Created slice kubepods-besteffort-pod32c26b77_1323_4b03_b0db_fb7432bc3e41.slice. Jul 2 00:44:54.665251 kubelet[2880]: W0702 00:44:54.665191 2880 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-155' and this object Jul 2 00:44:54.665520 kubelet[2880]: E0702 00:44:54.665495 2880 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-155' and this object Jul 2 00:44:54.665701 kubelet[2880]: W0702 00:44:54.665259 2880 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-27-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-155' and this object Jul 2 00:44:54.665862 kubelet[2880]: E0702 00:44:54.665838 2880 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-27-155" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no 
relationship found between node 'ip-172-31-27-155' and this object Jul 2 00:44:54.685802 kubelet[2880]: I0702 00:44:54.685756 2880 topology_manager.go:215] "Topology Admit Handler" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" podNamespace="kube-system" podName="cilium-jbjc5" Jul 2 00:44:54.697353 systemd[1]: Created slice kubepods-burstable-pod22e994fd_fdbe_4ff0_9c17_0294b010211e.slice. Jul 2 00:44:54.705916 kubelet[2880]: I0702 00:44:54.705877 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktmht\" (UniqueName: \"kubernetes.io/projected/32c26b77-1323-4b03-b0db-fb7432bc3e41-kube-api-access-ktmht\") pod \"kube-proxy-cx2cz\" (UID: \"32c26b77-1323-4b03-b0db-fb7432bc3e41\") " pod="kube-system/kube-proxy-cx2cz" Jul 2 00:44:54.706312 kubelet[2880]: I0702 00:44:54.706289 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/32c26b77-1323-4b03-b0db-fb7432bc3e41-kube-proxy\") pod \"kube-proxy-cx2cz\" (UID: \"32c26b77-1323-4b03-b0db-fb7432bc3e41\") " pod="kube-system/kube-proxy-cx2cz" Jul 2 00:44:54.706564 kubelet[2880]: I0702 00:44:54.706540 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32c26b77-1323-4b03-b0db-fb7432bc3e41-xtables-lock\") pod \"kube-proxy-cx2cz\" (UID: \"32c26b77-1323-4b03-b0db-fb7432bc3e41\") " pod="kube-system/kube-proxy-cx2cz" Jul 2 00:44:54.706807 kubelet[2880]: I0702 00:44:54.706708 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32c26b77-1323-4b03-b0db-fb7432bc3e41-lib-modules\") pod \"kube-proxy-cx2cz\" (UID: \"32c26b77-1323-4b03-b0db-fb7432bc3e41\") " pod="kube-system/kube-proxy-cx2cz" Jul 2 00:44:54.809510 kubelet[2880]: I0702 00:44:54.809440 2880 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-kernel\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.809714 kubelet[2880]: I0702 00:44:54.809526 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-hubble-tls\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.809714 kubelet[2880]: I0702 00:44:54.809584 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-run\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.809714 kubelet[2880]: I0702 00:44:54.809637 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-xtables-lock\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.809714 kubelet[2880]: I0702 00:44:54.809685 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22e994fd-fdbe-4ff0-9c17-0294b010211e-clustermesh-secrets\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810003 kubelet[2880]: I0702 00:44:54.809762 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-bpf-maps\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810003 kubelet[2880]: I0702 00:44:54.809815 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cni-path\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810003 kubelet[2880]: I0702 00:44:54.809890 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-cgroup\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810003 kubelet[2880]: I0702 00:44:54.809938 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-etc-cni-netd\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810003 kubelet[2880]: I0702 00:44:54.809984 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-lib-modules\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810422 kubelet[2880]: I0702 00:44:54.810030 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-config-path\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " 
pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810422 kubelet[2880]: I0702 00:44:54.810098 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-hostproc\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810422 kubelet[2880]: I0702 00:44:54.810147 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-net\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:54.810422 kubelet[2880]: I0702 00:44:54.810269 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqw6\" (UniqueName: \"kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-kube-api-access-8hqw6\") pod \"cilium-jbjc5\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " pod="kube-system/cilium-jbjc5" Jul 2 00:44:55.068164 kubelet[2880]: I0702 00:44:55.068110 2880 topology_manager.go:215] "Topology Admit Handler" podUID="b71849e8-2197-4ac8-998e-e69d70edb273" podNamespace="kube-system" podName="cilium-operator-5cc964979-twgjk" Jul 2 00:44:55.100422 systemd[1]: Created slice kubepods-besteffort-podb71849e8_2197_4ac8_998e_e69d70edb273.slice. 
Jul 2 00:44:55.112736 kubelet[2880]: I0702 00:44:55.112690 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6h8\" (UniqueName: \"kubernetes.io/projected/b71849e8-2197-4ac8-998e-e69d70edb273-kube-api-access-kd6h8\") pod \"cilium-operator-5cc964979-twgjk\" (UID: \"b71849e8-2197-4ac8-998e-e69d70edb273\") " pod="kube-system/cilium-operator-5cc964979-twgjk" Jul 2 00:44:55.113132 kubelet[2880]: I0702 00:44:55.113101 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71849e8-2197-4ac8-998e-e69d70edb273-cilium-config-path\") pod \"cilium-operator-5cc964979-twgjk\" (UID: \"b71849e8-2197-4ac8-998e-e69d70edb273\") " pod="kube-system/cilium-operator-5cc964979-twgjk" Jul 2 00:44:55.728826 env[1826]: time="2024-07-02T00:44:55.728754933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-twgjk,Uid:b71849e8-2197-4ac8-998e-e69d70edb273,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:55.762523 env[1826]: time="2024-07-02T00:44:55.762407543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:55.762799 env[1826]: time="2024-07-02T00:44:55.762485765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:55.762799 env[1826]: time="2024-07-02T00:44:55.762513343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:55.763297 env[1826]: time="2024-07-02T00:44:55.762869627Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f pid=2965 runtime=io.containerd.runc.v2 Jul 2 00:44:55.784884 systemd[1]: Started cri-containerd-40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f.scope. Jul 2 00:44:55.841000 env[1826]: time="2024-07-02T00:44:55.840929876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cx2cz,Uid:32c26b77-1323-4b03-b0db-fb7432bc3e41,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:55.866180 env[1826]: time="2024-07-02T00:44:55.866096977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-twgjk,Uid:b71849e8-2197-4ac8-998e-e69d70edb273,Namespace:kube-system,Attempt:0,} returns sandbox id \"40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f\"" Jul 2 00:44:55.873143 env[1826]: time="2024-07-02T00:44:55.870998313Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:44:55.885676 env[1826]: time="2024-07-02T00:44:55.885555992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:55.885946 env[1826]: time="2024-07-02T00:44:55.885877413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:55.886341 env[1826]: time="2024-07-02T00:44:55.886279132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:55.886895 env[1826]: time="2024-07-02T00:44:55.886818898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80d18529161f1c0ebb79f63c7db1ee8a00d2c9a02f9503387225c527cc431bbf pid=3004 runtime=io.containerd.runc.v2 Jul 2 00:44:55.909116 systemd[1]: Started cri-containerd-80d18529161f1c0ebb79f63c7db1ee8a00d2c9a02f9503387225c527cc431bbf.scope. Jul 2 00:44:55.912107 env[1826]: time="2024-07-02T00:44:55.912020934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbjc5,Uid:22e994fd-fdbe-4ff0-9c17-0294b010211e,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:55.966528 env[1826]: time="2024-07-02T00:44:55.966409314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:55.966787 env[1826]: time="2024-07-02T00:44:55.966736231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:55.966988 env[1826]: time="2024-07-02T00:44:55.966939947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:55.967786 env[1826]: time="2024-07-02T00:44:55.967700722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2 pid=3033 runtime=io.containerd.runc.v2 Jul 2 00:44:56.012370 systemd[1]: Started cri-containerd-3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2.scope. Jul 2 00:44:56.026301 systemd[1]: run-containerd-runc-k8s.io-3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2-runc.2SYBQX.mount: Deactivated successfully. 
Jul 2 00:44:56.041359 env[1826]: time="2024-07-02T00:44:56.041302174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cx2cz,Uid:32c26b77-1323-4b03-b0db-fb7432bc3e41,Namespace:kube-system,Attempt:0,} returns sandbox id \"80d18529161f1c0ebb79f63c7db1ee8a00d2c9a02f9503387225c527cc431bbf\"" Jul 2 00:44:56.051106 env[1826]: time="2024-07-02T00:44:56.051037546Z" level=info msg="CreateContainer within sandbox \"80d18529161f1c0ebb79f63c7db1ee8a00d2c9a02f9503387225c527cc431bbf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:44:56.103040 env[1826]: time="2024-07-02T00:44:56.100835937Z" level=info msg="CreateContainer within sandbox \"80d18529161f1c0ebb79f63c7db1ee8a00d2c9a02f9503387225c527cc431bbf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"208d902759647adacd09f19c66428bdaf739550462bdd7b27ba8d626171536e4\"" Jul 2 00:44:56.104799 env[1826]: time="2024-07-02T00:44:56.103726030Z" level=info msg="StartContainer for \"208d902759647adacd09f19c66428bdaf739550462bdd7b27ba8d626171536e4\"" Jul 2 00:44:56.113799 env[1826]: time="2024-07-02T00:44:56.113690583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbjc5,Uid:22e994fd-fdbe-4ff0-9c17-0294b010211e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\"" Jul 2 00:44:56.145927 systemd[1]: Started cri-containerd-208d902759647adacd09f19c66428bdaf739550462bdd7b27ba8d626171536e4.scope. 
Jul 2 00:44:56.230536 env[1826]: time="2024-07-02T00:44:56.230468408Z" level=info msg="StartContainer for \"208d902759647adacd09f19c66428bdaf739550462bdd7b27ba8d626171536e4\" returns successfully" Jul 2 00:44:56.982502 kubelet[2880]: I0702 00:44:56.982053 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cx2cz" podStartSLOduration=2.981995865 podStartE2EDuration="2.981995865s" podCreationTimestamp="2024-07-02 00:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:56.981681022 +0000 UTC m=+16.561720259" watchObservedRunningTime="2024-07-02 00:44:56.981995865 +0000 UTC m=+16.562035090" Jul 2 00:44:57.155650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2865364453.mount: Deactivated successfully. Jul 2 00:44:58.167464 env[1826]: time="2024-07-02T00:44:58.167377897Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:58.171183 env[1826]: time="2024-07-02T00:44:58.171089581Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:58.174494 env[1826]: time="2024-07-02T00:44:58.174403520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:58.176081 env[1826]: time="2024-07-02T00:44:58.175986529Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image 
reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 00:44:58.177441 env[1826]: time="2024-07-02T00:44:58.177380908Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:44:58.185595 env[1826]: time="2024-07-02T00:44:58.185524602Z" level=info msg="CreateContainer within sandbox \"40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:44:58.216383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537091449.mount: Deactivated successfully. Jul 2 00:44:58.236853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967510766.mount: Deactivated successfully. Jul 2 00:44:58.240668 env[1826]: time="2024-07-02T00:44:58.240556790Z" level=info msg="CreateContainer within sandbox \"40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\"" Jul 2 00:44:58.244100 env[1826]: time="2024-07-02T00:44:58.243279088Z" level=info msg="StartContainer for \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\"" Jul 2 00:44:58.285109 systemd[1]: Started cri-containerd-12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7.scope. Jul 2 00:44:58.364904 env[1826]: time="2024-07-02T00:44:58.364837376Z" level=info msg="StartContainer for \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\" returns successfully" Jul 2 00:45:05.603541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361633204.mount: Deactivated successfully. 
Jul 2 00:45:09.829372 env[1826]: time="2024-07-02T00:45:09.829301176Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:09.832780 env[1826]: time="2024-07-02T00:45:09.832688257Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:09.836146 env[1826]: time="2024-07-02T00:45:09.836085454Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:09.837488 env[1826]: time="2024-07-02T00:45:09.837431881Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 00:45:09.843469 env[1826]: time="2024-07-02T00:45:09.843405394Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:45:09.863855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463150067.mount: Deactivated successfully. Jul 2 00:45:09.879419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3844025883.mount: Deactivated successfully. 
Jul 2 00:45:09.885495 env[1826]: time="2024-07-02T00:45:09.885411959Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\"" Jul 2 00:45:09.887859 env[1826]: time="2024-07-02T00:45:09.886803521Z" level=info msg="StartContainer for \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\"" Jul 2 00:45:09.931024 systemd[1]: Started cri-containerd-0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557.scope. Jul 2 00:45:09.996953 env[1826]: time="2024-07-02T00:45:09.996875731Z" level=info msg="StartContainer for \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\" returns successfully" Jul 2 00:45:10.021054 systemd[1]: cri-containerd-0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557.scope: Deactivated successfully. Jul 2 00:45:10.045528 kubelet[2880]: I0702 00:45:10.045259 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-twgjk" podStartSLOduration=12.73774755 podStartE2EDuration="15.045160993s" podCreationTimestamp="2024-07-02 00:44:55 +0000 UTC" firstStartedPulling="2024-07-02 00:44:55.869293101 +0000 UTC m=+15.449332326" lastFinishedPulling="2024-07-02 00:44:58.176706472 +0000 UTC m=+17.756745769" observedRunningTime="2024-07-02 00:44:59.063819411 +0000 UTC m=+18.643858660" watchObservedRunningTime="2024-07-02 00:45:10.045160993 +0000 UTC m=+29.625200218" Jul 2 00:45:10.680985 env[1826]: time="2024-07-02T00:45:10.680593636Z" level=info msg="shim disconnected" id=0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557 Jul 2 00:45:10.680985 env[1826]: time="2024-07-02T00:45:10.680692761Z" level=warning msg="cleaning up after shim disconnected" id=0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557 namespace=k8s.io Jul 2 
00:45:10.680985 env[1826]: time="2024-07-02T00:45:10.680714530Z" level=info msg="cleaning up dead shim" Jul 2 00:45:10.696177 env[1826]: time="2024-07-02T00:45:10.696119391Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3322 runtime=io.containerd.runc.v2\n" Jul 2 00:45:10.857992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557-rootfs.mount: Deactivated successfully. Jul 2 00:45:11.020662 env[1826]: time="2024-07-02T00:45:11.017390569Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:45:11.054680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418719619.mount: Deactivated successfully. Jul 2 00:45:11.070305 env[1826]: time="2024-07-02T00:45:11.070165875Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\"" Jul 2 00:45:11.073155 env[1826]: time="2024-07-02T00:45:11.071352679Z" level=info msg="StartContainer for \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\"" Jul 2 00:45:11.110053 systemd[1]: Started cri-containerd-01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b.scope. Jul 2 00:45:11.187289 env[1826]: time="2024-07-02T00:45:11.187187524Z" level=info msg="StartContainer for \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\" returns successfully" Jul 2 00:45:11.212132 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:45:11.213042 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:45:11.214194 systemd[1]: Stopping systemd-sysctl.service... 
Jul 2 00:45:11.221812 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:45:11.230557 systemd[1]: cri-containerd-01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b.scope: Deactivated successfully. Jul 2 00:45:11.246525 systemd[1]: Finished systemd-sysctl.service. Jul 2 00:45:11.307498 env[1826]: time="2024-07-02T00:45:11.306981610Z" level=info msg="shim disconnected" id=01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b Jul 2 00:45:11.307498 env[1826]: time="2024-07-02T00:45:11.307159387Z" level=warning msg="cleaning up after shim disconnected" id=01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b namespace=k8s.io Jul 2 00:45:11.307498 env[1826]: time="2024-07-02T00:45:11.307185885Z" level=info msg="cleaning up dead shim" Jul 2 00:45:11.322489 env[1826]: time="2024-07-02T00:45:11.322428075Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3384 runtime=io.containerd.runc.v2\n" Jul 2 00:45:12.028964 env[1826]: time="2024-07-02T00:45:12.028495019Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:45:12.061572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149117241.mount: Deactivated successfully. Jul 2 00:45:12.075738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861428820.mount: Deactivated successfully. 
Jul 2 00:45:12.095989 env[1826]: time="2024-07-02T00:45:12.095914289Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\"" Jul 2 00:45:12.097505 env[1826]: time="2024-07-02T00:45:12.097436413Z" level=info msg="StartContainer for \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\"" Jul 2 00:45:12.136020 systemd[1]: Started cri-containerd-08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139.scope. Jul 2 00:45:12.222284 systemd[1]: cri-containerd-08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139.scope: Deactivated successfully. Jul 2 00:45:12.237265 env[1826]: time="2024-07-02T00:45:12.237178198Z" level=info msg="StartContainer for \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\" returns successfully" Jul 2 00:45:12.286772 env[1826]: time="2024-07-02T00:45:12.286605368Z" level=info msg="shim disconnected" id=08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139 Jul 2 00:45:12.287292 env[1826]: time="2024-07-02T00:45:12.287245938Z" level=warning msg="cleaning up after shim disconnected" id=08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139 namespace=k8s.io Jul 2 00:45:12.287472 env[1826]: time="2024-07-02T00:45:12.287439748Z" level=info msg="cleaning up dead shim" Jul 2 00:45:12.304096 env[1826]: time="2024-07-02T00:45:12.304035574Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3442 runtime=io.containerd.runc.v2\n" Jul 2 00:45:13.034160 env[1826]: time="2024-07-02T00:45:13.033631136Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:45:13.059937 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786740127.mount: Deactivated successfully. Jul 2 00:45:13.078547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051288184.mount: Deactivated successfully. Jul 2 00:45:13.081197 env[1826]: time="2024-07-02T00:45:13.081035419Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\"" Jul 2 00:45:13.083507 env[1826]: time="2024-07-02T00:45:13.083397610Z" level=info msg="StartContainer for \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\"" Jul 2 00:45:13.114282 systemd[1]: Started cri-containerd-139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5.scope. Jul 2 00:45:13.187674 systemd[1]: cri-containerd-139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5.scope: Deactivated successfully. 
Jul 2 00:45:13.191728 env[1826]: time="2024-07-02T00:45:13.191416539Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22e994fd_fdbe_4ff0_9c17_0294b010211e.slice/cri-containerd-139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5.scope/memory.events\": no such file or directory" Jul 2 00:45:13.195508 env[1826]: time="2024-07-02T00:45:13.195424473Z" level=info msg="StartContainer for \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\" returns successfully" Jul 2 00:45:13.248908 env[1826]: time="2024-07-02T00:45:13.248832802Z" level=info msg="shim disconnected" id=139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5 Jul 2 00:45:13.249359 env[1826]: time="2024-07-02T00:45:13.249323999Z" level=warning msg="cleaning up after shim disconnected" id=139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5 namespace=k8s.io Jul 2 00:45:13.249528 env[1826]: time="2024-07-02T00:45:13.249500361Z" level=info msg="cleaning up dead shim" Jul 2 00:45:13.265315 env[1826]: time="2024-07-02T00:45:13.265205247Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3498 runtime=io.containerd.runc.v2\n" Jul 2 00:45:14.037348 env[1826]: time="2024-07-02T00:45:14.037265424Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:45:14.091687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052468691.mount: Deactivated successfully. 
Jul 2 00:45:14.100288 env[1826]: time="2024-07-02T00:45:14.100192991Z" level=info msg="CreateContainer within sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\"" Jul 2 00:45:14.101656 env[1826]: time="2024-07-02T00:45:14.101602367Z" level=info msg="StartContainer for \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\"" Jul 2 00:45:14.132782 systemd[1]: Started cri-containerd-978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd.scope. Jul 2 00:45:14.225659 env[1826]: time="2024-07-02T00:45:14.225544938Z" level=info msg="StartContainer for \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\" returns successfully" Jul 2 00:45:14.457311 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 2 00:45:14.479967 kubelet[2880]: I0702 00:45:14.479911 2880 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:45:14.521617 kubelet[2880]: I0702 00:45:14.521566 2880 topology_manager.go:215] "Topology Admit Handler" podUID="b281c72c-790a-4fd6-8dd4-323e5d1efb6a" podNamespace="kube-system" podName="coredns-76f75df574-5gbxw" Jul 2 00:45:14.532424 kubelet[2880]: I0702 00:45:14.532367 2880 topology_manager.go:215] "Topology Admit Handler" podUID="d2c99726-a471-4d80-95b5-fa84d890b0cc" podNamespace="kube-system" podName="coredns-76f75df574-nsvgx" Jul 2 00:45:14.538246 systemd[1]: Created slice kubepods-burstable-podb281c72c_790a_4fd6_8dd4_323e5d1efb6a.slice. Jul 2 00:45:14.555542 systemd[1]: Created slice kubepods-burstable-podd2c99726_a471_4d80_95b5_fa84d890b0cc.slice. 
Jul 2 00:45:14.567494 kubelet[2880]: I0702 00:45:14.567445 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b281c72c-790a-4fd6-8dd4-323e5d1efb6a-config-volume\") pod \"coredns-76f75df574-5gbxw\" (UID: \"b281c72c-790a-4fd6-8dd4-323e5d1efb6a\") " pod="kube-system/coredns-76f75df574-5gbxw" Jul 2 00:45:14.567783 kubelet[2880]: I0702 00:45:14.567759 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2c99726-a471-4d80-95b5-fa84d890b0cc-config-volume\") pod \"coredns-76f75df574-nsvgx\" (UID: \"d2c99726-a471-4d80-95b5-fa84d890b0cc\") " pod="kube-system/coredns-76f75df574-nsvgx" Jul 2 00:45:14.568054 kubelet[2880]: I0702 00:45:14.567983 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdgn\" (UniqueName: \"kubernetes.io/projected/b281c72c-790a-4fd6-8dd4-323e5d1efb6a-kube-api-access-vwdgn\") pod \"coredns-76f75df574-5gbxw\" (UID: \"b281c72c-790a-4fd6-8dd4-323e5d1efb6a\") " pod="kube-system/coredns-76f75df574-5gbxw" Jul 2 00:45:14.568429 kubelet[2880]: I0702 00:45:14.568399 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrg7q\" (UniqueName: \"kubernetes.io/projected/d2c99726-a471-4d80-95b5-fa84d890b0cc-kube-api-access-wrg7q\") pod \"coredns-76f75df574-nsvgx\" (UID: \"d2c99726-a471-4d80-95b5-fa84d890b0cc\") " pod="kube-system/coredns-76f75df574-nsvgx" Jul 2 00:45:14.849686 env[1826]: time="2024-07-02T00:45:14.848795763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5gbxw,Uid:b281c72c-790a-4fd6-8dd4-323e5d1efb6a,Namespace:kube-system,Attempt:0,}" Jul 2 00:45:14.865519 env[1826]: time="2024-07-02T00:45:14.865385816Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-nsvgx,Uid:d2c99726-a471-4d80-95b5-fa84d890b0cc,Namespace:kube-system,Attempt:0,}" Jul 2 00:45:15.367276 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 2 00:45:17.182162 (udev-worker)[3663]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:45:17.184863 systemd-networkd[1540]: cilium_host: Link UP Jul 2 00:45:17.187539 (udev-worker)[3664]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:45:17.194001 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 00:45:17.194136 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 00:45:17.189547 systemd-networkd[1540]: cilium_net: Link UP Jul 2 00:45:17.192028 systemd-networkd[1540]: cilium_net: Gained carrier Jul 2 00:45:17.194143 systemd-networkd[1540]: cilium_host: Gained carrier Jul 2 00:45:17.385942 systemd-networkd[1540]: cilium_vxlan: Link UP Jul 2 00:45:17.385964 systemd-networkd[1540]: cilium_vxlan: Gained carrier Jul 2 00:45:17.421539 systemd-networkd[1540]: cilium_host: Gained IPv6LL Jul 2 00:45:17.829714 systemd-networkd[1540]: cilium_net: Gained IPv6LL Jul 2 00:45:17.898248 kernel: NET: Registered PF_ALG protocol family Jul 2 00:45:18.469889 systemd-networkd[1540]: cilium_vxlan: Gained IPv6LL Jul 2 00:45:19.303204 systemd-networkd[1540]: lxc_health: Link UP Jul 2 00:45:19.325249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:45:19.327527 systemd-networkd[1540]: lxc_health: Gained carrier Jul 2 00:45:19.946339 kubelet[2880]: I0702 00:45:19.946268 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jbjc5" podStartSLOduration=12.22457494 podStartE2EDuration="25.946183619s" podCreationTimestamp="2024-07-02 00:44:54 +0000 UTC" firstStartedPulling="2024-07-02 00:44:56.116395574 +0000 UTC m=+15.696434811" lastFinishedPulling="2024-07-02 00:45:09.838004277 
+0000 UTC m=+29.418043490" observedRunningTime="2024-07-02 00:45:15.071170275 +0000 UTC m=+34.651209512" watchObservedRunningTime="2024-07-02 00:45:19.946183619 +0000 UTC m=+39.526222856" Jul 2 00:45:20.012402 systemd-networkd[1540]: lxca19697ef8086: Link UP Jul 2 00:45:20.029275 kernel: eth0: renamed from tmp3c783 Jul 2 00:45:20.038301 systemd-networkd[1540]: lxcf82b9cedfa5b: Link UP Jul 2 00:45:20.045267 kernel: eth0: renamed from tmpbd756 Jul 2 00:45:20.051430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca19697ef8086: link becomes ready Jul 2 00:45:20.051347 systemd-networkd[1540]: lxca19697ef8086: Gained carrier Jul 2 00:45:20.052156 (udev-worker)[3678]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:45:20.059517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf82b9cedfa5b: link becomes ready Jul 2 00:45:20.058674 systemd-networkd[1540]: lxcf82b9cedfa5b: Gained carrier Jul 2 00:45:21.094054 systemd-networkd[1540]: lxca19697ef8086: Gained IPv6LL Jul 2 00:45:21.158021 systemd-networkd[1540]: lxc_health: Gained IPv6LL Jul 2 00:45:21.862844 systemd-networkd[1540]: lxcf82b9cedfa5b: Gained IPv6LL Jul 2 00:45:27.191201 systemd[1]: Started sshd@5-172.31.27.155:22-139.178.89.65:40112.service. Jul 2 00:45:27.382344 sshd[4032]: Accepted publickey for core from 139.178.89.65 port 40112 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:27.384541 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:27.393786 systemd-logind[1817]: New session 6 of user core. Jul 2 00:45:27.395125 systemd[1]: Started session-6.scope. Jul 2 00:45:27.723630 sshd[4032]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:27.728939 systemd[1]: sshd@5-172.31.27.155:22-139.178.89.65:40112.service: Deactivated successfully. Jul 2 00:45:27.730980 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:45:27.732949 systemd-logind[1817]: Session 6 logged out. Waiting for processes to exit. 
Jul 2 00:45:27.735923 systemd-logind[1817]: Removed session 6. Jul 2 00:45:28.811527 env[1826]: time="2024-07-02T00:45:28.811376607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:45:28.811527 env[1826]: time="2024-07-02T00:45:28.811465927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:45:28.812452 env[1826]: time="2024-07-02T00:45:28.811496456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:45:28.813128 env[1826]: time="2024-07-02T00:45:28.812936583Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c783e8c8e3cbabeb488341c79c2e42453a13bfc37c89589717503b598205173 pid=4057 runtime=io.containerd.runc.v2 Jul 2 00:45:28.860144 systemd[1]: run-containerd-runc-k8s.io-3c783e8c8e3cbabeb488341c79c2e42453a13bfc37c89589717503b598205173-runc.bolIwB.mount: Deactivated successfully. Jul 2 00:45:28.870519 systemd[1]: Started cri-containerd-3c783e8c8e3cbabeb488341c79c2e42453a13bfc37c89589717503b598205173.scope. Jul 2 00:45:28.919810 env[1826]: time="2024-07-02T00:45:28.919658477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:45:28.919987 env[1826]: time="2024-07-02T00:45:28.919832003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:45:28.919987 env[1826]: time="2024-07-02T00:45:28.919921635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:45:28.920641 env[1826]: time="2024-07-02T00:45:28.920538782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd756f78ccd366a03d88f6f51afecca28435422d21f4e007ad1efb8b1f9fb0a9 pid=4091 runtime=io.containerd.runc.v2 Jul 2 00:45:28.970627 systemd[1]: Started cri-containerd-bd756f78ccd366a03d88f6f51afecca28435422d21f4e007ad1efb8b1f9fb0a9.scope. Jul 2 00:45:29.034606 env[1826]: time="2024-07-02T00:45:29.034535716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5gbxw,Uid:b281c72c-790a-4fd6-8dd4-323e5d1efb6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c783e8c8e3cbabeb488341c79c2e42453a13bfc37c89589717503b598205173\"" Jul 2 00:45:29.040947 env[1826]: time="2024-07-02T00:45:29.040869738Z" level=info msg="CreateContainer within sandbox \"3c783e8c8e3cbabeb488341c79c2e42453a13bfc37c89589717503b598205173\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:45:29.077161 env[1826]: time="2024-07-02T00:45:29.076984662Z" level=info msg="CreateContainer within sandbox \"3c783e8c8e3cbabeb488341c79c2e42453a13bfc37c89589717503b598205173\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f0bedf817d48cf38188652f3c3d746b02c416d848c449c9f0a3651dfa6604c4\"" Jul 2 00:45:29.078188 env[1826]: time="2024-07-02T00:45:29.078127586Z" level=info msg="StartContainer for \"2f0bedf817d48cf38188652f3c3d746b02c416d848c449c9f0a3651dfa6604c4\"" Jul 2 00:45:29.143280 env[1826]: time="2024-07-02T00:45:29.143199784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nsvgx,Uid:d2c99726-a471-4d80-95b5-fa84d890b0cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd756f78ccd366a03d88f6f51afecca28435422d21f4e007ad1efb8b1f9fb0a9\"" Jul 2 00:45:29.159335 systemd[1]: Started cri-containerd-2f0bedf817d48cf38188652f3c3d746b02c416d848c449c9f0a3651dfa6604c4.scope. 
Jul 2 00:45:29.161081 env[1826]: time="2024-07-02T00:45:29.161021809Z" level=info msg="CreateContainer within sandbox \"bd756f78ccd366a03d88f6f51afecca28435422d21f4e007ad1efb8b1f9fb0a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:45:29.195706 env[1826]: time="2024-07-02T00:45:29.195588518Z" level=info msg="CreateContainer within sandbox \"bd756f78ccd366a03d88f6f51afecca28435422d21f4e007ad1efb8b1f9fb0a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79aef4175116ff2be4ea24b2bbb28495af6400f7f637aacb5f9794f5e30f7f09\"" Jul 2 00:45:29.197256 env[1826]: time="2024-07-02T00:45:29.197178447Z" level=info msg="StartContainer for \"79aef4175116ff2be4ea24b2bbb28495af6400f7f637aacb5f9794f5e30f7f09\"" Jul 2 00:45:29.254604 systemd[1]: Started cri-containerd-79aef4175116ff2be4ea24b2bbb28495af6400f7f637aacb5f9794f5e30f7f09.scope. Jul 2 00:45:29.308943 env[1826]: time="2024-07-02T00:45:29.308857570Z" level=info msg="StartContainer for \"2f0bedf817d48cf38188652f3c3d746b02c416d848c449c9f0a3651dfa6604c4\" returns successfully" Jul 2 00:45:29.398484 env[1826]: time="2024-07-02T00:45:29.398309276Z" level=info msg="StartContainer for \"79aef4175116ff2be4ea24b2bbb28495af6400f7f637aacb5f9794f5e30f7f09\" returns successfully" Jul 2 00:45:29.822940 systemd[1]: run-containerd-runc-k8s.io-bd756f78ccd366a03d88f6f51afecca28435422d21f4e007ad1efb8b1f9fb0a9-runc.D2LK93.mount: Deactivated successfully. 
Jul 2 00:45:30.121858 kubelet[2880]: I0702 00:45:30.121709 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5gbxw" podStartSLOduration=35.12163306 podStartE2EDuration="35.12163306s" podCreationTimestamp="2024-07-02 00:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:45:30.121263398 +0000 UTC m=+49.701302647" watchObservedRunningTime="2024-07-02 00:45:30.12163306 +0000 UTC m=+49.701672285" Jul 2 00:45:32.755917 systemd[1]: Started sshd@6-172.31.27.155:22-139.178.89.65:36550.service. Jul 2 00:45:32.924926 sshd[4217]: Accepted publickey for core from 139.178.89.65 port 36550 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:32.928257 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:32.937199 systemd-logind[1817]: New session 7 of user core. Jul 2 00:45:32.937910 systemd[1]: Started session-7.scope. Jul 2 00:45:33.201839 sshd[4217]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:33.208430 systemd-logind[1817]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:45:33.208977 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:45:33.211066 systemd-logind[1817]: Removed session 7. Jul 2 00:45:33.212131 systemd[1]: sshd@6-172.31.27.155:22-139.178.89.65:36550.service: Deactivated successfully. Jul 2 00:45:38.232183 systemd[1]: Started sshd@7-172.31.27.155:22-139.178.89.65:56638.service. Jul 2 00:45:38.400961 sshd[4229]: Accepted publickey for core from 139.178.89.65 port 56638 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:38.404363 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:38.414080 systemd[1]: Started session-8.scope. Jul 2 00:45:38.415353 systemd-logind[1817]: New session 8 of user core. 
Jul 2 00:45:38.670854 sshd[4229]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:38.676265 systemd-logind[1817]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:45:38.676670 systemd[1]: sshd@7-172.31.27.155:22-139.178.89.65:56638.service: Deactivated successfully. Jul 2 00:45:38.677989 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:45:38.679907 systemd-logind[1817]: Removed session 8. Jul 2 00:45:43.700699 systemd[1]: Started sshd@8-172.31.27.155:22-139.178.89.65:56654.service. Jul 2 00:45:43.872258 sshd[4243]: Accepted publickey for core from 139.178.89.65 port 56654 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:43.874910 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:43.883700 systemd-logind[1817]: New session 9 of user core. Jul 2 00:45:43.885737 systemd[1]: Started session-9.scope. Jul 2 00:45:44.143725 sshd[4243]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:44.149350 systemd-logind[1817]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:45:44.149736 systemd[1]: sshd@8-172.31.27.155:22-139.178.89.65:56654.service: Deactivated successfully. Jul 2 00:45:44.151061 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:45:44.154753 systemd-logind[1817]: Removed session 9. Jul 2 00:45:49.174625 systemd[1]: Started sshd@9-172.31.27.155:22-139.178.89.65:56164.service. Jul 2 00:45:49.346906 sshd[4255]: Accepted publickey for core from 139.178.89.65 port 56164 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:49.349723 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:49.359385 systemd[1]: Started session-10.scope. Jul 2 00:45:49.359587 systemd-logind[1817]: New session 10 of user core. 
Jul 2 00:45:49.624959 sshd[4255]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:49.631102 systemd[1]: sshd@9-172.31.27.155:22-139.178.89.65:56164.service: Deactivated successfully. Jul 2 00:45:49.632418 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:45:49.633431 systemd-logind[1817]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:45:49.635291 systemd-logind[1817]: Removed session 10. Jul 2 00:45:49.653175 systemd[1]: Started sshd@10-172.31.27.155:22-139.178.89.65:56178.service. Jul 2 00:45:49.827357 sshd[4268]: Accepted publickey for core from 139.178.89.65 port 56178 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:49.830066 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:49.839490 systemd[1]: Started session-11.scope. Jul 2 00:45:49.840572 systemd-logind[1817]: New session 11 of user core. Jul 2 00:45:50.165046 sshd[4268]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:50.173772 systemd[1]: sshd@10-172.31.27.155:22-139.178.89.65:56178.service: Deactivated successfully. Jul 2 00:45:50.175455 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:45:50.177791 systemd-logind[1817]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:45:50.181964 systemd-logind[1817]: Removed session 11. Jul 2 00:45:50.196344 systemd[1]: Started sshd@11-172.31.27.155:22-139.178.89.65:56182.service. Jul 2 00:45:50.388904 sshd[4279]: Accepted publickey for core from 139.178.89.65 port 56182 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:50.392307 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:50.401186 systemd[1]: Started session-12.scope. Jul 2 00:45:50.402595 systemd-logind[1817]: New session 12 of user core. 
Jul 2 00:45:50.667779 sshd[4279]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:50.673236 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:45:50.673496 systemd-logind[1817]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:45:50.674827 systemd[1]: sshd@11-172.31.27.155:22-139.178.89.65:56182.service: Deactivated successfully. Jul 2 00:45:50.677453 systemd-logind[1817]: Removed session 12. Jul 2 00:45:51.117015 amazon-ssm-agent[1801]: 2024-07-02 00:45:51 INFO [HealthCheck] HealthCheck reporting agent health. Jul 2 00:45:55.695306 systemd[1]: Started sshd@12-172.31.27.155:22-139.178.89.65:56188.service. Jul 2 00:45:55.867124 sshd[4292]: Accepted publickey for core from 139.178.89.65 port 56188 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:55.870869 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:55.879846 systemd[1]: Started session-13.scope. Jul 2 00:45:55.880931 systemd-logind[1817]: New session 13 of user core. Jul 2 00:45:56.131974 sshd[4292]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:56.138411 systemd-logind[1817]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:45:56.138855 systemd[1]: sshd@12-172.31.27.155:22-139.178.89.65:56188.service: Deactivated successfully. Jul 2 00:45:56.140154 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:45:56.142435 systemd-logind[1817]: Removed session 13. Jul 2 00:46:01.162526 systemd[1]: Started sshd@13-172.31.27.155:22-139.178.89.65:46332.service. Jul 2 00:46:01.338281 sshd[4307]: Accepted publickey for core from 139.178.89.65 port 46332 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:01.341105 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:01.352506 systemd-logind[1817]: New session 14 of user core. Jul 2 00:46:01.353664 systemd[1]: Started session-14.scope. 
Jul 2 00:46:01.639914 sshd[4307]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:01.645491 systemd-logind[1817]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:46:01.645930 systemd[1]: sshd@13-172.31.27.155:22-139.178.89.65:46332.service: Deactivated successfully. Jul 2 00:46:01.647310 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:46:01.648857 systemd-logind[1817]: Removed session 14. Jul 2 00:46:06.669042 systemd[1]: Started sshd@14-172.31.27.155:22-139.178.89.65:46346.service. Jul 2 00:46:06.837958 sshd[4319]: Accepted publickey for core from 139.178.89.65 port 46346 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:06.840805 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:06.851277 systemd[1]: Started session-15.scope. Jul 2 00:46:06.852505 systemd-logind[1817]: New session 15 of user core. Jul 2 00:46:07.102844 sshd[4319]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:07.108501 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:46:07.109800 systemd[1]: sshd@14-172.31.27.155:22-139.178.89.65:46346.service: Deactivated successfully. Jul 2 00:46:07.111774 systemd-logind[1817]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:46:07.114515 systemd-logind[1817]: Removed session 15. Jul 2 00:46:12.134473 systemd[1]: Started sshd@15-172.31.27.155:22-139.178.89.65:35482.service. Jul 2 00:46:12.303693 sshd[4331]: Accepted publickey for core from 139.178.89.65 port 35482 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:12.306454 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:12.314913 systemd-logind[1817]: New session 16 of user core. Jul 2 00:46:12.316086 systemd[1]: Started session-16.scope. 
Jul 2 00:46:12.567497 sshd[4331]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:12.573000 systemd[1]: sshd@15-172.31.27.155:22-139.178.89.65:35482.service: Deactivated successfully. Jul 2 00:46:12.574981 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:46:12.576772 systemd-logind[1817]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:46:12.579846 systemd-logind[1817]: Removed session 16. Jul 2 00:46:12.599495 systemd[1]: Started sshd@16-172.31.27.155:22-139.178.89.65:35486.service. Jul 2 00:46:12.772007 sshd[4343]: Accepted publickey for core from 139.178.89.65 port 35486 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:12.775709 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:12.786490 systemd[1]: Started session-17.scope. Jul 2 00:46:12.787312 systemd-logind[1817]: New session 17 of user core. Jul 2 00:46:13.102768 sshd[4343]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:13.108456 systemd-logind[1817]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:46:13.109864 systemd[1]: sshd@16-172.31.27.155:22-139.178.89.65:35486.service: Deactivated successfully. Jul 2 00:46:13.111188 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:46:13.113151 systemd-logind[1817]: Removed session 17. Jul 2 00:46:13.134718 systemd[1]: Started sshd@17-172.31.27.155:22-139.178.89.65:35502.service. Jul 2 00:46:13.309244 sshd[4353]: Accepted publickey for core from 139.178.89.65 port 35502 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:13.314174 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:13.323704 systemd[1]: Started session-18.scope. Jul 2 00:46:13.324715 systemd-logind[1817]: New session 18 of user core. 
Jul 2 00:46:15.845762 sshd[4353]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:15.852372 systemd[1]: sshd@17-172.31.27.155:22-139.178.89.65:35502.service: Deactivated successfully. Jul 2 00:46:15.853746 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:46:15.854141 systemd-logind[1817]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:46:15.856623 systemd-logind[1817]: Removed session 18. Jul 2 00:46:15.875651 systemd[1]: Started sshd@18-172.31.27.155:22-139.178.89.65:35518.service. Jul 2 00:46:16.050289 sshd[4371]: Accepted publickey for core from 139.178.89.65 port 35518 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:16.053493 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:16.062381 systemd-logind[1817]: New session 19 of user core. Jul 2 00:46:16.062643 systemd[1]: Started session-19.scope. Jul 2 00:46:16.563979 sshd[4371]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:16.570282 systemd-logind[1817]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:46:16.570849 systemd[1]: sshd@18-172.31.27.155:22-139.178.89.65:35518.service: Deactivated successfully. Jul 2 00:46:16.572396 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:46:16.575085 systemd-logind[1817]: Removed session 19. Jul 2 00:46:16.595561 systemd[1]: Started sshd@19-172.31.27.155:22-139.178.89.65:35530.service. Jul 2 00:46:16.760525 sshd[4381]: Accepted publickey for core from 139.178.89.65 port 35530 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:16.763397 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:16.771989 systemd-logind[1817]: New session 20 of user core. Jul 2 00:46:16.773260 systemd[1]: Started session-20.scope. 
Jul 2 00:46:17.022197 sshd[4381]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:17.028545 systemd[1]: sshd@19-172.31.27.155:22-139.178.89.65:35530.service: Deactivated successfully. Jul 2 00:46:17.029803 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:46:17.031986 systemd-logind[1817]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:46:17.034432 systemd-logind[1817]: Removed session 20. Jul 2 00:46:22.054946 systemd[1]: Started sshd@20-172.31.27.155:22-139.178.89.65:56364.service. Jul 2 00:46:22.225599 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 56364 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:22.229388 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:22.237628 systemd-logind[1817]: New session 21 of user core. Jul 2 00:46:22.238680 systemd[1]: Started session-21.scope. Jul 2 00:46:22.487771 sshd[4393]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:22.493994 systemd-logind[1817]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:46:22.494656 systemd[1]: sshd@20-172.31.27.155:22-139.178.89.65:56364.service: Deactivated successfully. Jul 2 00:46:22.495924 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:46:22.498949 systemd-logind[1817]: Removed session 21. Jul 2 00:46:27.519649 systemd[1]: Started sshd@21-172.31.27.155:22-139.178.89.65:56372.service. Jul 2 00:46:27.687837 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 56372 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:27.690970 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:27.700566 systemd-logind[1817]: New session 22 of user core. Jul 2 00:46:27.701169 systemd[1]: Started session-22.scope. 
Jul 2 00:46:27.948073 sshd[4410]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:27.953572 systemd-logind[1817]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:46:27.954384 systemd[1]: sshd@21-172.31.27.155:22-139.178.89.65:56372.service: Deactivated successfully. Jul 2 00:46:27.955698 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:46:27.958496 systemd-logind[1817]: Removed session 22. Jul 2 00:46:32.977623 systemd[1]: Started sshd@22-172.31.27.155:22-139.178.89.65:60002.service. Jul 2 00:46:33.152261 sshd[4423]: Accepted publickey for core from 139.178.89.65 port 60002 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:33.155006 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:33.164519 systemd[1]: Started session-23.scope. Jul 2 00:46:33.166342 systemd-logind[1817]: New session 23 of user core. Jul 2 00:46:33.428708 sshd[4423]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:33.434205 systemd-logind[1817]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:46:33.434860 systemd[1]: sshd@22-172.31.27.155:22-139.178.89.65:60002.service: Deactivated successfully. Jul 2 00:46:33.436474 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:46:33.438789 systemd-logind[1817]: Removed session 23. Jul 2 00:46:38.461249 systemd[1]: Started sshd@23-172.31.27.155:22-139.178.89.65:41350.service. Jul 2 00:46:38.634284 sshd[4435]: Accepted publickey for core from 139.178.89.65 port 41350 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:38.637058 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:38.646387 systemd-logind[1817]: New session 24 of user core. Jul 2 00:46:38.646602 systemd[1]: Started session-24.scope. 
Jul 2 00:46:38.905802 sshd[4435]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:38.911191 systemd[1]: sshd@23-172.31.27.155:22-139.178.89.65:41350.service: Deactivated successfully. Jul 2 00:46:38.912689 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:46:38.914341 systemd-logind[1817]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:46:38.916996 systemd-logind[1817]: Removed session 24. Jul 2 00:46:38.934384 systemd[1]: Started sshd@24-172.31.27.155:22-139.178.89.65:41360.service. Jul 2 00:46:39.103621 sshd[4447]: Accepted publickey for core from 139.178.89.65 port 41360 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:39.106712 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:39.117159 systemd[1]: Started session-25.scope. Jul 2 00:46:39.117976 systemd-logind[1817]: New session 25 of user core. Jul 2 00:46:40.874780 kubelet[2880]: I0702 00:46:40.874729 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nsvgx" podStartSLOduration=105.874670652 podStartE2EDuration="1m45.874670652s" podCreationTimestamp="2024-07-02 00:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:45:30.177185983 +0000 UTC m=+49.757225292" watchObservedRunningTime="2024-07-02 00:46:40.874670652 +0000 UTC m=+120.454709877" Jul 2 00:46:40.936774 env[1826]: time="2024-07-02T00:46:40.936705344Z" level=info msg="StopContainer for \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\" with timeout 30 (s)" Jul 2 00:46:40.937804 env[1826]: time="2024-07-02T00:46:40.937753766Z" level=info msg="Stop container \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\" with signal terminated" Jul 2 00:46:40.961845 env[1826]: time="2024-07-02T00:46:40.961747223Z" level=error msg="failed to reload cni 
configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:46:40.971627 systemd[1]: cri-containerd-12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7.scope: Deactivated successfully. Jul 2 00:46:40.982417 env[1826]: time="2024-07-02T00:46:40.982365183Z" level=info msg="StopContainer for \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\" with timeout 2 (s)" Jul 2 00:46:40.983483 env[1826]: time="2024-07-02T00:46:40.983432398Z" level=info msg="Stop container \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\" with signal terminated" Jul 2 00:46:40.998751 systemd-networkd[1540]: lxc_health: Link DOWN Jul 2 00:46:40.998766 systemd-networkd[1540]: lxc_health: Lost carrier Jul 2 00:46:41.038342 systemd[1]: cri-containerd-978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd.scope: Deactivated successfully. Jul 2 00:46:41.038972 systemd[1]: cri-containerd-978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd.scope: Consumed 15.023s CPU time. Jul 2 00:46:41.055372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7-rootfs.mount: Deactivated successfully. 
Jul 2 00:46:41.081300 env[1826]: time="2024-07-02T00:46:41.081176800Z" level=info msg="shim disconnected" id=12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7 Jul 2 00:46:41.081707 env[1826]: time="2024-07-02T00:46:41.081657947Z" level=warning msg="cleaning up after shim disconnected" id=12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7 namespace=k8s.io Jul 2 00:46:41.081882 env[1826]: time="2024-07-02T00:46:41.081851599Z" level=info msg="cleaning up dead shim" Jul 2 00:46:41.111026 env[1826]: time="2024-07-02T00:46:41.109653770Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4511 runtime=io.containerd.runc.v2\n" Jul 2 00:46:41.109873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd-rootfs.mount: Deactivated successfully. Jul 2 00:46:41.115399 kubelet[2880]: E0702 00:46:41.115316 2880 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:46:41.119544 env[1826]: time="2024-07-02T00:46:41.119480642Z" level=info msg="StopContainer for \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\" returns successfully" Jul 2 00:46:41.121016 env[1826]: time="2024-07-02T00:46:41.120917640Z" level=info msg="StopPodSandbox for \"40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f\"" Jul 2 00:46:41.121185 env[1826]: time="2024-07-02T00:46:41.121067274Z" level=info msg="Container to stop \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:46:41.124922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f-shm.mount: Deactivated successfully. 
Jul 2 00:46:41.129955 env[1826]: time="2024-07-02T00:46:41.129877118Z" level=info msg="shim disconnected" id=978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd Jul 2 00:46:41.129955 env[1826]: time="2024-07-02T00:46:41.129951473Z" level=warning msg="cleaning up after shim disconnected" id=978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd namespace=k8s.io Jul 2 00:46:41.130267 env[1826]: time="2024-07-02T00:46:41.129974562Z" level=info msg="cleaning up dead shim" Jul 2 00:46:41.144025 systemd[1]: cri-containerd-40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f.scope: Deactivated successfully. Jul 2 00:46:41.169021 env[1826]: time="2024-07-02T00:46:41.168962852Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4531 runtime=io.containerd.runc.v2\n" Jul 2 00:46:41.173532 env[1826]: time="2024-07-02T00:46:41.173465554Z" level=info msg="StopContainer for \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\" returns successfully" Jul 2 00:46:41.174562 env[1826]: time="2024-07-02T00:46:41.174504892Z" level=info msg="StopPodSandbox for \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\"" Jul 2 00:46:41.174919 env[1826]: time="2024-07-02T00:46:41.174874207Z" level=info msg="Container to stop \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:46:41.175127 env[1826]: time="2024-07-02T00:46:41.175091871Z" level=info msg="Container to stop \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:46:41.175397 env[1826]: time="2024-07-02T00:46:41.175348394Z" level=info msg="Container to stop \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 
00:46:41.175824 env[1826]: time="2024-07-02T00:46:41.175762639Z" level=info msg="Container to stop \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:46:41.176031 env[1826]: time="2024-07-02T00:46:41.175985944Z" level=info msg="Container to stop \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:46:41.179675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2-shm.mount: Deactivated successfully. Jul 2 00:46:41.194972 systemd[1]: cri-containerd-3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2.scope: Deactivated successfully. Jul 2 00:46:41.247370 env[1826]: time="2024-07-02T00:46:41.247297083Z" level=info msg="shim disconnected" id=40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f Jul 2 00:46:41.247370 env[1826]: time="2024-07-02T00:46:41.247372962Z" level=warning msg="cleaning up after shim disconnected" id=40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f namespace=k8s.io Jul 2 00:46:41.247716 env[1826]: time="2024-07-02T00:46:41.247395558Z" level=info msg="cleaning up dead shim" Jul 2 00:46:41.260931 env[1826]: time="2024-07-02T00:46:41.260844945Z" level=info msg="shim disconnected" id=3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2 Jul 2 00:46:41.261401 env[1826]: time="2024-07-02T00:46:41.261349518Z" level=warning msg="cleaning up after shim disconnected" id=3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2 namespace=k8s.io Jul 2 00:46:41.261958 env[1826]: time="2024-07-02T00:46:41.261891340Z" level=info msg="cleaning up dead shim" Jul 2 00:46:41.272832 env[1826]: time="2024-07-02T00:46:41.272752330Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:41Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=4581 runtime=io.containerd.runc.v2\n" Jul 2 00:46:41.273464 env[1826]: time="2024-07-02T00:46:41.273395688Z" level=info msg="TearDown network for sandbox \"40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f\" successfully" Jul 2 00:46:41.273464 env[1826]: time="2024-07-02T00:46:41.273455199Z" level=info msg="StopPodSandbox for \"40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f\" returns successfully" Jul 2 00:46:41.292269 env[1826]: time="2024-07-02T00:46:41.291334648Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4589 runtime=io.containerd.runc.v2\n" Jul 2 00:46:41.292269 env[1826]: time="2024-07-02T00:46:41.291973350Z" level=info msg="TearDown network for sandbox \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" successfully" Jul 2 00:46:41.292269 env[1826]: time="2024-07-02T00:46:41.292016924Z" level=info msg="StopPodSandbox for \"3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2\" returns successfully" Jul 2 00:46:41.326428 kubelet[2880]: I0702 00:46:41.326365 2880 scope.go:117] "RemoveContainer" containerID="12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7" Jul 2 00:46:41.331414 env[1826]: time="2024-07-02T00:46:41.331329035Z" level=info msg="RemoveContainer for \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\"" Jul 2 00:46:41.344974 env[1826]: time="2024-07-02T00:46:41.343975998Z" level=info msg="RemoveContainer for \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\" returns successfully" Jul 2 00:46:41.347650 kubelet[2880]: I0702 00:46:41.347605 2880 scope.go:117] "RemoveContainer" containerID="12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7" Jul 2 00:46:41.353693 env[1826]: time="2024-07-02T00:46:41.353580273Z" level=error msg="ContainerStatus for \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\": not found" Jul 2 00:46:41.354367 kubelet[2880]: E0702 00:46:41.354330 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\": not found" containerID="12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7" Jul 2 00:46:41.355194 kubelet[2880]: I0702 00:46:41.355075 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7"} err="failed to get container status \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"12c7f166d51447ea3cc21e72f344abd8da48bfc7cc4b1b9a1841e046115d17f7\": not found" Jul 2 00:46:41.357088 kubelet[2880]: I0702 00:46:41.357053 2880 scope.go:117] "RemoveContainer" containerID="978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd" Jul 2 00:46:41.363550 env[1826]: time="2024-07-02T00:46:41.363341671Z" level=info msg="RemoveContainer for \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\"" Jul 2 00:46:41.369845 env[1826]: time="2024-07-02T00:46:41.369789100Z" level=info msg="RemoveContainer for \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\" returns successfully" Jul 2 00:46:41.370409 kubelet[2880]: I0702 00:46:41.370370 2880 scope.go:117] "RemoveContainer" containerID="139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5" Jul 2 00:46:41.372444 env[1826]: time="2024-07-02T00:46:41.372397153Z" level=info msg="RemoveContainer for \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\"" Jul 2 00:46:41.372605 kubelet[2880]: I0702 
00:46:41.372438 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-hostproc\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.372605 kubelet[2880]: I0702 00:46:41.372534 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-run\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.372742 kubelet[2880]: I0702 00:46:41.372616 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-config-path\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.372742 kubelet[2880]: I0702 00:46:41.372695 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22e994fd-fdbe-4ff0-9c17-0294b010211e-clustermesh-secrets\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.372742 kubelet[2880]: I0702 00:46:41.372739 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-bpf-maps\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.372932 kubelet[2880]: I0702 00:46:41.372809 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-kernel\") pod 
\"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.372932 kubelet[2880]: I0702 00:46:41.372917 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-xtables-lock\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373067 kubelet[2880]: I0702 00:46:41.372994 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-lib-modules\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373152 kubelet[2880]: I0702 00:46:41.373074 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cni-path\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373258 kubelet[2880]: I0702 00:46:41.373153 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-net\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373258 kubelet[2880]: I0702 00:46:41.373246 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71849e8-2197-4ac8-998e-e69d70edb273-cilium-config-path\") pod \"b71849e8-2197-4ac8-998e-e69d70edb273\" (UID: \"b71849e8-2197-4ac8-998e-e69d70edb273\") " Jul 2 00:46:41.373424 kubelet[2880]: I0702 00:46:41.373379 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-hubble-tls\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373493 kubelet[2880]: I0702 00:46:41.373458 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-cgroup\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373566 kubelet[2880]: I0702 00:46:41.373532 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-etc-cni-netd\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373678 kubelet[2880]: I0702 00:46:41.373608 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hqw6\" (UniqueName: \"kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-kube-api-access-8hqw6\") pod \"22e994fd-fdbe-4ff0-9c17-0294b010211e\" (UID: \"22e994fd-fdbe-4ff0-9c17-0294b010211e\") " Jul 2 00:46:41.373678 kubelet[2880]: I0702 00:46:41.373661 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd6h8\" (UniqueName: \"kubernetes.io/projected/b71849e8-2197-4ac8-998e-e69d70edb273-kube-api-access-kd6h8\") pod \"b71849e8-2197-4ac8-998e-e69d70edb273\" (UID: \"b71849e8-2197-4ac8-998e-e69d70edb273\") " Jul 2 00:46:41.374844 kubelet[2880]: I0702 00:46:41.374800 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.375518 kubelet[2880]: I0702 00:46:41.375447 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-hostproc" (OuterVolumeSpecName: "hostproc") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.375865 kubelet[2880]: I0702 00:46:41.375834 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.379709 env[1826]: time="2024-07-02T00:46:41.379562306Z" level=info msg="RemoveContainer for \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\" returns successfully" Jul 2 00:46:41.384054 kubelet[2880]: I0702 00:46:41.383981 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71849e8-2197-4ac8-998e-e69d70edb273-kube-api-access-kd6h8" (OuterVolumeSpecName: "kube-api-access-kd6h8") pod "b71849e8-2197-4ac8-998e-e69d70edb273" (UID: "b71849e8-2197-4ac8-998e-e69d70edb273"). InnerVolumeSpecName "kube-api-access-kd6h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:46:41.384337 kubelet[2880]: I0702 00:46:41.384100 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cni-path" (OuterVolumeSpecName: "cni-path") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.384337 kubelet[2880]: I0702 00:46:41.384151 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.384771 kubelet[2880]: I0702 00:46:41.384732 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:46:41.390531 kubelet[2880]: I0702 00:46:41.390416 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71849e8-2197-4ac8-998e-e69d70edb273-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b71849e8-2197-4ac8-998e-e69d70edb273" (UID: "b71849e8-2197-4ac8-998e-e69d70edb273"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:46:41.390833 kubelet[2880]: I0702 00:46:41.390793 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e994fd-fdbe-4ff0-9c17-0294b010211e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:46:41.391009 kubelet[2880]: I0702 00:46:41.390978 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.391187 kubelet[2880]: I0702 00:46:41.391153 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.391403 kubelet[2880]: I0702 00:46:41.391375 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.391692 kubelet[2880]: I0702 00:46:41.391664 2880 scope.go:117] "RemoveContainer" containerID="08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139" Jul 2 00:46:41.392887 kubelet[2880]: I0702 00:46:41.392816 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.393197 kubelet[2880]: I0702 00:46:41.393166 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:41.394593 env[1826]: time="2024-07-02T00:46:41.394081524Z" level=info msg="RemoveContainer for \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\"" Jul 2 00:46:41.399271 kubelet[2880]: I0702 00:46:41.398202 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:46:41.399881 env[1826]: time="2024-07-02T00:46:41.399823936Z" level=info msg="RemoveContainer for \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\" returns successfully" Jul 2 00:46:41.400430 kubelet[2880]: I0702 00:46:41.400392 2880 scope.go:117] "RemoveContainer" containerID="01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b" Jul 2 00:46:41.403832 env[1826]: time="2024-07-02T00:46:41.403723974Z" level=info msg="RemoveContainer for \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\"" Jul 2 00:46:41.406348 kubelet[2880]: I0702 00:46:41.405873 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-kube-api-access-8hqw6" (OuterVolumeSpecName: "kube-api-access-8hqw6") pod "22e994fd-fdbe-4ff0-9c17-0294b010211e" (UID: "22e994fd-fdbe-4ff0-9c17-0294b010211e"). InnerVolumeSpecName "kube-api-access-8hqw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:46:41.410689 env[1826]: time="2024-07-02T00:46:41.409266325Z" level=info msg="RemoveContainer for \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\" returns successfully" Jul 2 00:46:41.411530 kubelet[2880]: I0702 00:46:41.411490 2880 scope.go:117] "RemoveContainer" containerID="0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557" Jul 2 00:46:41.415820 env[1826]: time="2024-07-02T00:46:41.415411569Z" level=info msg="RemoveContainer for \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\"" Jul 2 00:46:41.420394 env[1826]: time="2024-07-02T00:46:41.420332248Z" level=info msg="RemoveContainer for \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\" returns successfully" Jul 2 00:46:41.420952 kubelet[2880]: I0702 00:46:41.420915 2880 scope.go:117] "RemoveContainer" containerID="978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd" Jul 2 00:46:41.421956 env[1826]: time="2024-07-02T00:46:41.421454521Z" level=error msg="ContainerStatus for \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\": not found" Jul 2 00:46:41.422612 kubelet[2880]: E0702 00:46:41.422433 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\": not found" containerID="978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd" Jul 2 00:46:41.422775 kubelet[2880]: I0702 00:46:41.422694 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd"} err="failed to get container status 
\"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"978192eb48ee80f665dc3c744a6fe96f25f1c8c2236d55b90cf5369c32eb61dd\": not found" Jul 2 00:46:41.422775 kubelet[2880]: I0702 00:46:41.422729 2880 scope.go:117] "RemoveContainer" containerID="139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5" Jul 2 00:46:41.423251 env[1826]: time="2024-07-02T00:46:41.423128433Z" level=error msg="ContainerStatus for \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\": not found" Jul 2 00:46:41.423564 kubelet[2880]: E0702 00:46:41.423533 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\": not found" containerID="139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5" Jul 2 00:46:41.423801 kubelet[2880]: I0702 00:46:41.423769 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5"} err="failed to get container status \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"139872947d6d4c3cd1a95cf7f2424ca3ea4d49dcc8ad47e13f68659aaec9c2e5\": not found" Jul 2 00:46:41.423944 kubelet[2880]: I0702 00:46:41.423921 2880 scope.go:117] "RemoveContainer" containerID="08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139" Jul 2 00:46:41.424612 env[1826]: time="2024-07-02T00:46:41.424449170Z" level=error msg="ContainerStatus for \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\": not found" Jul 2 00:46:41.424956 kubelet[2880]: E0702 00:46:41.424929 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\": not found" containerID="08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139" Jul 2 00:46:41.425114 kubelet[2880]: I0702 00:46:41.425090 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139"} err="failed to get container status \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\": rpc error: code = NotFound desc = an error occurred when try to find container \"08d2dd4876c9898c78b9ea53d92022d472516e1fc6fb1f1f1c799ab94e8de139\": not found" Jul 2 00:46:41.425333 kubelet[2880]: I0702 00:46:41.425303 2880 scope.go:117] "RemoveContainer" containerID="01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b" Jul 2 00:46:41.425963 env[1826]: time="2024-07-02T00:46:41.425865984Z" level=error msg="ContainerStatus for \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\": not found" Jul 2 00:46:41.426355 kubelet[2880]: E0702 00:46:41.426322 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\": not found" containerID="01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b" Jul 2 00:46:41.426462 kubelet[2880]: I0702 
00:46:41.426386 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b"} err="failed to get container status \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"01180be8718645ff9004e172602a715f57f0dffe592b696e9cdac6e237ae7a8b\": not found" Jul 2 00:46:41.426462 kubelet[2880]: I0702 00:46:41.426434 2880 scope.go:117] "RemoveContainer" containerID="0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557" Jul 2 00:46:41.426902 env[1826]: time="2024-07-02T00:46:41.426811118Z" level=error msg="ContainerStatus for \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\": not found" Jul 2 00:46:41.427177 kubelet[2880]: E0702 00:46:41.427147 2880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\": not found" containerID="0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557" Jul 2 00:46:41.427422 kubelet[2880]: I0702 00:46:41.427398 2880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557"} err="failed to get container status \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\": rpc error: code = NotFound desc = an error occurred when try to find container \"0911de6592a5c6e6735ef746ad62da4a001d44c7eb80279a7f79a84300119557\": not found" Jul 2 00:46:41.474990 kubelet[2880]: I0702 00:46:41.474948 2880 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/22e994fd-fdbe-4ff0-9c17-0294b010211e-clustermesh-secrets\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.475292 kubelet[2880]: I0702 00:46:41.475269 2880 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-bpf-maps\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.475441 kubelet[2880]: I0702 00:46:41.475420 2880 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-kernel\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.475564 kubelet[2880]: I0702 00:46:41.475545 2880 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-xtables-lock\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.475691 kubelet[2880]: I0702 00:46:41.475671 2880 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-lib-modules\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.475813 kubelet[2880]: I0702 00:46:41.475793 2880 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-host-proc-sys-net\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.475927 kubelet[2880]: I0702 00:46:41.475908 2880 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71849e8-2197-4ac8-998e-e69d70edb273-cilium-config-path\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476044 kubelet[2880]: I0702 00:46:41.476024 2880 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cni-path\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476180 kubelet[2880]: I0702 00:46:41.476147 2880 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-hubble-tls\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476387 kubelet[2880]: I0702 00:46:41.476366 2880 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-cgroup\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476503 kubelet[2880]: I0702 00:46:41.476484 2880 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-etc-cni-netd\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476631 kubelet[2880]: I0702 00:46:41.476611 2880 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8hqw6\" (UniqueName: \"kubernetes.io/projected/22e994fd-fdbe-4ff0-9c17-0294b010211e-kube-api-access-8hqw6\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476748 kubelet[2880]: I0702 00:46:41.476726 2880 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kd6h8\" (UniqueName: \"kubernetes.io/projected/b71849e8-2197-4ac8-998e-e69d70edb273-kube-api-access-kd6h8\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476872 kubelet[2880]: I0702 00:46:41.476851 2880 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-run\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.476993 kubelet[2880]: I0702 00:46:41.476973 2880 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/22e994fd-fdbe-4ff0-9c17-0294b010211e-cilium-config-path\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.477113 kubelet[2880]: I0702 00:46:41.477093 2880 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22e994fd-fdbe-4ff0-9c17-0294b010211e-hostproc\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:41.639312 systemd[1]: Removed slice kubepods-besteffort-podb71849e8_2197_4ac8_998e_e69d70edb273.slice. Jul 2 00:46:41.650997 systemd[1]: Removed slice kubepods-burstable-pod22e994fd_fdbe_4ff0_9c17_0294b010211e.slice. Jul 2 00:46:41.651185 systemd[1]: kubepods-burstable-pod22e994fd_fdbe_4ff0_9c17_0294b010211e.slice: Consumed 15.258s CPU time. Jul 2 00:46:41.904539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3775b1d6a934deaa0a7d24ef1e5faa3ef961acf6f685863231434ba0c9fb24b2-rootfs.mount: Deactivated successfully. Jul 2 00:46:41.904733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40a21263282068f0864e8dcb17e7b3ee043461047dca97010c04bc3afbd74c2f-rootfs.mount: Deactivated successfully. Jul 2 00:46:41.904866 systemd[1]: var-lib-kubelet-pods-b71849e8\x2d2197\x2d4ac8\x2d998e\x2de69d70edb273-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkd6h8.mount: Deactivated successfully. Jul 2 00:46:41.905004 systemd[1]: var-lib-kubelet-pods-22e994fd\x2dfdbe\x2d4ff0\x2d9c17\x2d0294b010211e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hqw6.mount: Deactivated successfully. Jul 2 00:46:41.905146 systemd[1]: var-lib-kubelet-pods-22e994fd\x2dfdbe\x2d4ff0\x2d9c17\x2d0294b010211e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:46:41.905311 systemd[1]: var-lib-kubelet-pods-22e994fd\x2dfdbe\x2d4ff0\x2d9c17\x2d0294b010211e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 00:46:42.752093 kubelet[2880]: I0702 00:46:42.752056 2880 setters.go:568] "Node became not ready" node="ip-172-31-27-155" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:46:42Z","lastTransitionTime":"2024-07-02T00:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:46:42.774711 kubelet[2880]: E0702 00:46:42.774665 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nsvgx" podUID="d2c99726-a471-4d80-95b5-fa84d890b0cc" Jul 2 00:46:42.780043 kubelet[2880]: I0702 00:46:42.780000 2880 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" path="/var/lib/kubelet/pods/22e994fd-fdbe-4ff0-9c17-0294b010211e/volumes" Jul 2 00:46:42.781886 kubelet[2880]: I0702 00:46:42.781846 2880 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b71849e8-2197-4ac8-998e-e69d70edb273" path="/var/lib/kubelet/pods/b71849e8-2197-4ac8-998e-e69d70edb273/volumes" Jul 2 00:46:42.839585 sshd[4447]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:42.845002 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:46:42.845394 systemd[1]: session-25.scope: Consumed 1.021s CPU time. Jul 2 00:46:42.846957 systemd[1]: sshd@24-172.31.27.155:22-139.178.89.65:41360.service: Deactivated successfully. Jul 2 00:46:42.847355 systemd-logind[1817]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:46:42.850325 systemd-logind[1817]: Removed session 25. Jul 2 00:46:42.868291 systemd[1]: Started sshd@25-172.31.27.155:22-139.178.89.65:41370.service. 
Jul 2 00:46:43.038144 sshd[4615]: Accepted publickey for core from 139.178.89.65 port 41370 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:43.041497 sshd[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:43.050927 systemd[1]: Started session-26.scope.
Jul 2 00:46:43.052002 systemd-logind[1817]: New session 26 of user core.
Jul 2 00:46:44.497013 sshd[4615]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:44.503741 systemd[1]: sshd@25-172.31.27.155:22-139.178.89.65:41370.service: Deactivated successfully.
Jul 2 00:46:44.505110 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:46:44.505837 systemd-logind[1817]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:46:44.507151 systemd[1]: session-26.scope: Consumed 1.224s CPU time.
Jul 2 00:46:44.508679 systemd-logind[1817]: Removed session 26.
Jul 2 00:46:44.526628 systemd[1]: Started sshd@26-172.31.27.155:22-139.178.89.65:41376.service.
Jul 2 00:46:44.558396 kubelet[2880]: I0702 00:46:44.558333 2880 topology_manager.go:215] "Topology Admit Handler" podUID="90b7b7f7-f2b5-4edf-938e-abd2a883a562" podNamespace="kube-system" podName="cilium-v8nk9" Jul 2 00:46:44.558951 kubelet[2880]: E0702 00:46:44.558457 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" containerName="mount-cgroup" Jul 2 00:46:44.558951 kubelet[2880]: E0702 00:46:44.558483 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" containerName="clean-cilium-state" Jul 2 00:46:44.558951 kubelet[2880]: E0702 00:46:44.558503 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" containerName="cilium-agent" Jul 2 00:46:44.558951 kubelet[2880]: E0702 00:46:44.558522 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b71849e8-2197-4ac8-998e-e69d70edb273" containerName="cilium-operator" Jul 2 00:46:44.558951 kubelet[2880]: E0702 00:46:44.558543 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" containerName="apply-sysctl-overwrites" Jul 2 00:46:44.558951 kubelet[2880]: E0702 00:46:44.558562 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" containerName="mount-bpf-fs" Jul 2 00:46:44.558951 kubelet[2880]: I0702 00:46:44.558609 2880 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71849e8-2197-4ac8-998e-e69d70edb273" containerName="cilium-operator" Jul 2 00:46:44.558951 kubelet[2880]: I0702 00:46:44.558627 2880 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e994fd-fdbe-4ff0-9c17-0294b010211e" containerName="cilium-agent" Jul 2 00:46:44.575586 systemd[1]: Created slice kubepods-burstable-pod90b7b7f7_f2b5_4edf_938e_abd2a883a562.slice. 
Jul 2 00:46:44.597358 kubelet[2880]: I0702 00:46:44.597280 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-xtables-lock\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597515 kubelet[2880]: I0702 00:46:44.597383 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-kernel\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597515 kubelet[2880]: I0702 00:46:44.597459 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-cgroup\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597634 kubelet[2880]: I0702 00:46:44.597531 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-lib-modules\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597634 kubelet[2880]: I0702 00:46:44.597608 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-etc-cni-netd\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597776 kubelet[2880]: I0702 00:46:44.597682 2880 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whm6r\" (UniqueName: \"kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-kube-api-access-whm6r\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597776 kubelet[2880]: I0702 00:46:44.597733 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-clustermesh-secrets\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597903 kubelet[2880]: I0702 00:46:44.597806 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-config-path\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.597903 kubelet[2880]: I0702 00:46:44.597877 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hubble-tls\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.598015 kubelet[2880]: I0702 00:46:44.597946 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hostproc\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.598086 kubelet[2880]: I0702 00:46:44.597998 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-run\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.598086 kubelet[2880]: I0702 00:46:44.598068 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cni-path\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.598194 kubelet[2880]: I0702 00:46:44.598138 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-bpf-maps\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.598310 kubelet[2880]: I0702 00:46:44.598219 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-ipsec-secrets\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.598310 kubelet[2880]: I0702 00:46:44.598291 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-net\") pod \"cilium-v8nk9\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " pod="kube-system/cilium-v8nk9" Jul 2 00:46:44.734273 sshd[4625]: Accepted publickey for core from 139.178.89.65 port 41376 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:44.740642 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:44.763316 systemd-logind[1817]: New session 27 of user core. 
Jul 2 00:46:44.764780 systemd[1]: Started session-27.scope.
Jul 2 00:46:44.774891 kubelet[2880]: E0702 00:46:44.774841 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nsvgx" podUID="d2c99726-a471-4d80-95b5-fa84d890b0cc"
Jul 2 00:46:44.881392 env[1826]: time="2024-07-02T00:46:44.881021982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v8nk9,Uid:90b7b7f7-f2b5-4edf-938e-abd2a883a562,Namespace:kube-system,Attempt:0,}"
Jul 2 00:46:44.916900 env[1826]: time="2024-07-02T00:46:44.915672086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:46:44.916900 env[1826]: time="2024-07-02T00:46:44.915744161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:46:44.916900 env[1826]: time="2024-07-02T00:46:44.915769446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:46:44.916900 env[1826]: time="2024-07-02T00:46:44.916010284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a pid=4646 runtime=io.containerd.runc.v2
Jul 2 00:46:44.938545 systemd[1]: Started cri-containerd-9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a.scope.
Jul 2 00:46:45.016890 env[1826]: time="2024-07-02T00:46:45.016746577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v8nk9,Uid:90b7b7f7-f2b5-4edf-938e-abd2a883a562,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a\""
Jul 2 00:46:45.025656 env[1826]: time="2024-07-02T00:46:45.025598795Z" level=info msg="CreateContainer within sandbox \"9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:46:45.059557 env[1826]: time="2024-07-02T00:46:45.059492400Z" level=info msg="CreateContainer within sandbox \"9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\""
Jul 2 00:46:45.062599 env[1826]: time="2024-07-02T00:46:45.061574052Z" level=info msg="StartContainer for \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\""
Jul 2 00:46:45.101787 systemd[1]: Started cri-containerd-a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343.scope.
Jul 2 00:46:45.131647 sshd[4625]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:45.137459 systemd[1]: sshd@26-172.31.27.155:22-139.178.89.65:41376.service: Deactivated successfully.
Jul 2 00:46:45.138824 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:46:45.139692 systemd-logind[1817]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:46:45.142037 systemd-logind[1817]: Removed session 27.
Jul 2 00:46:45.162696 systemd[1]: Started sshd@27-172.31.27.155:22-139.178.89.65:41386.service.
Jul 2 00:46:45.164951 systemd[1]: cri-containerd-a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343.scope: Deactivated successfully.
Jul 2 00:46:45.205028 env[1826]: time="2024-07-02T00:46:45.204960873Z" level=info msg="shim disconnected" id=a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343 Jul 2 00:46:45.205525 env[1826]: time="2024-07-02T00:46:45.205486194Z" level=warning msg="cleaning up after shim disconnected" id=a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343 namespace=k8s.io Jul 2 00:46:45.205681 env[1826]: time="2024-07-02T00:46:45.205652341Z" level=info msg="cleaning up dead shim" Jul 2 00:46:45.225471 env[1826]: time="2024-07-02T00:46:45.225402259Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4709 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T00:46:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 00:46:45.226278 env[1826]: time="2024-07-02T00:46:45.226091471Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Jul 2 00:46:45.226668 env[1826]: time="2024-07-02T00:46:45.226519828Z" level=error msg="Failed to pipe stderr of container \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\"" error="reading from a closed fifo" Jul 2 00:46:45.226835 env[1826]: time="2024-07-02T00:46:45.226589995Z" level=error msg="Failed to pipe stdout of container \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\"" error="reading from a closed fifo" Jul 2 00:46:45.229580 env[1826]: time="2024-07-02T00:46:45.229477911Z" level=error msg="StartContainer for \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 00:46:45.229896 kubelet[2880]: E0702 00:46:45.229846 2880 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343" Jul 2 00:46:45.230047 kubelet[2880]: E0702 00:46:45.229997 2880 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 00:46:45.230047 kubelet[2880]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 00:46:45.230047 kubelet[2880]: rm /hostbin/cilium-mount Jul 2 00:46:45.230313 kubelet[2880]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-whm6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-v8nk9_kube-system(90b7b7f7-f2b5-4edf-938e-abd2a883a562): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 00:46:45.230313 kubelet[2880]: E0702 00:46:45.230065 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-v8nk9" podUID="90b7b7f7-f2b5-4edf-938e-abd2a883a562" Jul 2 00:46:45.351850 sshd[4707]: Accepted publickey for core from 139.178.89.65 port 41386 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:45.355088 sshd[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:45.363413 env[1826]: time="2024-07-02T00:46:45.362801870Z" level=info msg="StopPodSandbox for \"9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a\"" Jul 2 00:46:45.364273 env[1826]: time="2024-07-02T00:46:45.362941051Z" level=info msg="Container to stop \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jul 2 00:46:45.380098 systemd[1]: Started session-28.scope. Jul 2 00:46:45.382672 systemd-logind[1817]: New session 28 of user core. Jul 2 00:46:45.401970 systemd[1]: cri-containerd-9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a.scope: Deactivated successfully. Jul 2 00:46:45.460914 env[1826]: time="2024-07-02T00:46:45.460848363Z" level=info msg="shim disconnected" id=9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a Jul 2 00:46:45.462064 env[1826]: time="2024-07-02T00:46:45.462006110Z" level=warning msg="cleaning up after shim disconnected" id=9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a namespace=k8s.io Jul 2 00:46:45.462357 env[1826]: time="2024-07-02T00:46:45.462313730Z" level=info msg="cleaning up dead shim" Jul 2 00:46:45.476844 env[1826]: time="2024-07-02T00:46:45.476786195Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4742 runtime=io.containerd.runc.v2\n" Jul 2 00:46:45.477701 env[1826]: time="2024-07-02T00:46:45.477653038Z" level=info msg="TearDown network for sandbox \"9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a\" successfully" Jul 2 00:46:45.477904 env[1826]: time="2024-07-02T00:46:45.477869082Z" level=info msg="StopPodSandbox for \"9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a\" returns successfully" Jul 2 00:46:45.506572 kubelet[2880]: I0702 00:46:45.506494 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cni-path\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.506804 kubelet[2880]: I0702 00:46:45.506628 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-clustermesh-secrets\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.506804 kubelet[2880]: I0702 00:46:45.506699 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-lib-modules\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.506804 kubelet[2880]: I0702 00:46:45.506750 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whm6r\" (UniqueName: \"kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-kube-api-access-whm6r\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.506997 kubelet[2880]: I0702 00:46:45.506823 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-config-path\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.506997 kubelet[2880]: I0702 00:46:45.506891 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-run\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.507512 kubelet[2880]: I0702 00:46:45.507451 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hostproc\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.507630 kubelet[2880]: I0702 00:46:45.507552 2880 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-kernel\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.508007 kubelet[2880]: I0702 00:46:45.507966 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hubble-tls\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.508178 kubelet[2880]: I0702 00:46:45.508143 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-bpf-maps\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.508845 kubelet[2880]: I0702 00:46:45.508695 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-net\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.509049 kubelet[2880]: I0702 00:46:45.509012 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-xtables-lock\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.509161 kubelet[2880]: I0702 00:46:45.509133 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-ipsec-secrets\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: 
\"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.511290 kubelet[2880]: I0702 00:46:45.509324 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-etc-cni-netd\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.511290 kubelet[2880]: I0702 00:46:45.510333 2880 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-cgroup\") pod \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\" (UID: \"90b7b7f7-f2b5-4edf-938e-abd2a883a562\") " Jul 2 00:46:45.511290 kubelet[2880]: I0702 00:46:45.510974 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.522251 kubelet[2880]: I0702 00:46:45.522175 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cni-path" (OuterVolumeSpecName: "cni-path") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.533086 kubelet[2880]: I0702 00:46:45.532983 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.533909 kubelet[2880]: I0702 00:46:45.533830 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.534091 kubelet[2880]: I0702 00:46:45.533934 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.534188 kubelet[2880]: I0702 00:46:45.508770 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.534322 kubelet[2880]: I0702 00:46:45.534273 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.534873 kubelet[2880]: I0702 00:46:45.534813 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.534970 kubelet[2880]: I0702 00:46:45.534910 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.536131 kubelet[2880]: I0702 00:46:45.536078 2880 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-run\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.536295 kubelet[2880]: I0702 00:46:45.536136 2880 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-bpf-maps\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.536295 kubelet[2880]: I0702 00:46:45.536168 2880 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-net\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.536295 kubelet[2880]: I0702 00:46:45.536202 2880 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-host-proc-sys-kernel\") on node 
\"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.536295 kubelet[2880]: I0702 00:46:45.536257 2880 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-xtables-lock\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.536295 kubelet[2880]: I0702 00:46:45.536285 2880 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-etc-cni-netd\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.536677 kubelet[2880]: I0702 00:46:45.536325 2880 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-cgroup\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.537322 kubelet[2880]: I0702 00:46:45.537271 2880 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cni-path\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.537456 kubelet[2880]: I0702 00:46:45.537384 2880 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-lib-modules\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.537753 kubelet[2880]: I0702 00:46:45.537710 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:46:45.537980 kubelet[2880]: I0702 00:46:45.537943 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hostproc" (OuterVolumeSpecName: "hostproc") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:45.546064 kubelet[2880]: I0702 00:46:45.545985 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:46:45.566360 kubelet[2880]: I0702 00:46:45.566305 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:46:45.572544 kubelet[2880]: I0702 00:46:45.572488 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-kube-api-access-whm6r" (OuterVolumeSpecName: "kube-api-access-whm6r") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "kube-api-access-whm6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:46:45.579538 kubelet[2880]: I0702 00:46:45.579485 2880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "90b7b7f7-f2b5-4edf-938e-abd2a883a562" (UID: "90b7b7f7-f2b5-4edf-938e-abd2a883a562"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:46:45.639673 kubelet[2880]: I0702 00:46:45.639404 2880 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-clustermesh-secrets\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.639673 kubelet[2880]: I0702 00:46:45.639488 2880 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-whm6r\" (UniqueName: \"kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-kube-api-access-whm6r\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.639673 kubelet[2880]: I0702 00:46:45.639520 2880 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-config-path\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.639673 kubelet[2880]: I0702 00:46:45.639550 2880 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hostproc\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.639673 kubelet[2880]: I0702 00:46:45.639597 2880 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90b7b7f7-f2b5-4edf-938e-abd2a883a562-hubble-tls\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.639673 kubelet[2880]: I0702 00:46:45.639643 2880 reconciler_common.go:300] "Volume 
detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90b7b7f7-f2b5-4edf-938e-abd2a883a562-cilium-ipsec-secrets\") on node \"ip-172-31-27-155\" DevicePath \"\"" Jul 2 00:46:45.712125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b8975bd64a51cce0c61a4451e241b7bca6ae2770c627729866361e4a99a827a-shm.mount: Deactivated successfully. Jul 2 00:46:45.712372 systemd[1]: var-lib-kubelet-pods-90b7b7f7\x2df2b5\x2d4edf\x2d938e\x2dabd2a883a562-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwhm6r.mount: Deactivated successfully. Jul 2 00:46:45.712522 systemd[1]: var-lib-kubelet-pods-90b7b7f7\x2df2b5\x2d4edf\x2d938e\x2dabd2a883a562-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:46:45.712713 systemd[1]: var-lib-kubelet-pods-90b7b7f7\x2df2b5\x2d4edf\x2d938e\x2dabd2a883a562-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:46:45.712860 systemd[1]: var-lib-kubelet-pods-90b7b7f7\x2df2b5\x2d4edf\x2d938e\x2dabd2a883a562-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 2 00:46:46.116136 kubelet[2880]: E0702 00:46:46.116102 2880 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:46:46.363951 kubelet[2880]: I0702 00:46:46.363903 2880 scope.go:117] "RemoveContainer" containerID="a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343" Jul 2 00:46:46.365815 env[1826]: time="2024-07-02T00:46:46.365746567Z" level=info msg="RemoveContainer for \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\"" Jul 2 00:46:46.371304 env[1826]: time="2024-07-02T00:46:46.370737857Z" level=info msg="RemoveContainer for \"a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343\" returns successfully" Jul 2 00:46:46.381281 systemd[1]: Removed slice kubepods-burstable-pod90b7b7f7_f2b5_4edf_938e_abd2a883a562.slice. Jul 2 00:46:46.439958 kubelet[2880]: I0702 00:46:46.439892 2880 topology_manager.go:215] "Topology Admit Handler" podUID="0638ba7d-ee4d-4f9f-93de-887e41e1b1a7" podNamespace="kube-system" podName="cilium-kgssd" Jul 2 00:46:46.440302 kubelet[2880]: E0702 00:46:46.440274 2880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90b7b7f7-f2b5-4edf-938e-abd2a883a562" containerName="mount-cgroup" Jul 2 00:46:46.440552 kubelet[2880]: I0702 00:46:46.440516 2880 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b7b7f7-f2b5-4edf-938e-abd2a883a562" containerName="mount-cgroup" Jul 2 00:46:46.452146 systemd[1]: Created slice kubepods-burstable-pod0638ba7d_ee4d_4f9f_93de_887e41e1b1a7.slice. 
Jul 2 00:46:46.545442 kubelet[2880]: I0702 00:46:46.545386 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-xtables-lock\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545618 kubelet[2880]: I0702 00:46:46.545467 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-bpf-maps\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545618 kubelet[2880]: I0702 00:46:46.545515 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-etc-cni-netd\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545618 kubelet[2880]: I0702 00:46:46.545563 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-cilium-run\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545618 kubelet[2880]: I0702 00:46:46.545610 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-host-proc-sys-kernel\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545890 kubelet[2880]: I0702 00:46:46.545657 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-lib-modules\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545890 kubelet[2880]: I0702 00:46:46.545703 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-clustermesh-secrets\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545890 kubelet[2880]: I0702 00:46:46.545751 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-host-proc-sys-net\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545890 kubelet[2880]: I0702 00:46:46.545798 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7cbn\" (UniqueName: \"kubernetes.io/projected/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-kube-api-access-q7cbn\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545890 kubelet[2880]: I0702 00:46:46.545843 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-cilium-ipsec-secrets\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.545890 kubelet[2880]: I0702 00:46:46.545889 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-hubble-tls\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.546270 kubelet[2880]: I0702 00:46:46.545935 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-cilium-cgroup\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.546270 kubelet[2880]: I0702 00:46:46.545996 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-cilium-config-path\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.546270 kubelet[2880]: I0702 00:46:46.546048 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-cni-path\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.546270 kubelet[2880]: I0702 00:46:46.546091 2880 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0638ba7d-ee4d-4f9f-93de-887e41e1b1a7-hostproc\") pod \"cilium-kgssd\" (UID: \"0638ba7d-ee4d-4f9f-93de-887e41e1b1a7\") " pod="kube-system/cilium-kgssd" Jul 2 00:46:46.758174 env[1826]: time="2024-07-02T00:46:46.758012802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgssd,Uid:0638ba7d-ee4d-4f9f-93de-887e41e1b1a7,Namespace:kube-system,Attempt:0,}" Jul 2 00:46:46.780008 kubelet[2880]: E0702 00:46:46.779956 2880 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nsvgx" podUID="d2c99726-a471-4d80-95b5-fa84d890b0cc" Jul 2 00:46:46.790403 kubelet[2880]: I0702 00:46:46.790305 2880 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="90b7b7f7-f2b5-4edf-938e-abd2a883a562" path="/var/lib/kubelet/pods/90b7b7f7-f2b5-4edf-938e-abd2a883a562/volumes" Jul 2 00:46:46.798289 env[1826]: time="2024-07-02T00:46:46.797958324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:46:46.798289 env[1826]: time="2024-07-02T00:46:46.798025731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:46:46.798289 env[1826]: time="2024-07-02T00:46:46.798051244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:46:46.798992 env[1826]: time="2024-07-02T00:46:46.798832427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b pid=4776 runtime=io.containerd.runc.v2 Jul 2 00:46:46.830862 systemd[1]: Started cri-containerd-81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b.scope. Jul 2 00:46:46.841482 systemd[1]: run-containerd-runc-k8s.io-81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b-runc.fxmWzF.mount: Deactivated successfully. 
Jul 2 00:46:46.895481 env[1826]: time="2024-07-02T00:46:46.895423514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgssd,Uid:0638ba7d-ee4d-4f9f-93de-887e41e1b1a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\"" Jul 2 00:46:46.904623 env[1826]: time="2024-07-02T00:46:46.904567584Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:46:46.936570 env[1826]: time="2024-07-02T00:46:46.936467921Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910\"" Jul 2 00:46:46.940263 env[1826]: time="2024-07-02T00:46:46.938920532Z" level=info msg="StartContainer for \"a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910\"" Jul 2 00:46:46.995765 systemd[1]: Started cri-containerd-a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910.scope. Jul 2 00:46:47.160486 env[1826]: time="2024-07-02T00:46:47.160420503Z" level=info msg="StartContainer for \"a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910\" returns successfully" Jul 2 00:46:47.187357 systemd[1]: cri-containerd-a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910.scope: Deactivated successfully. 
Jul 2 00:46:47.261022 env[1826]: time="2024-07-02T00:46:47.260956342Z" level=info msg="shim disconnected" id=a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910 Jul 2 00:46:47.261462 env[1826]: time="2024-07-02T00:46:47.261414364Z" level=warning msg="cleaning up after shim disconnected" id=a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910 namespace=k8s.io Jul 2 00:46:47.261597 env[1826]: time="2024-07-02T00:46:47.261569051Z" level=info msg="cleaning up dead shim" Jul 2 00:46:47.275832 env[1826]: time="2024-07-02T00:46:47.275775393Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4858 runtime=io.containerd.runc.v2\n" Jul 2 00:46:47.376259 env[1826]: time="2024-07-02T00:46:47.375078890Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:46:47.401756 env[1826]: time="2024-07-02T00:46:47.401692762Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5\"" Jul 2 00:46:47.405924 env[1826]: time="2024-07-02T00:46:47.405863986Z" level=info msg="StartContainer for \"50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5\"" Jul 2 00:46:47.435864 systemd[1]: Started cri-containerd-50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5.scope. Jul 2 00:46:47.508780 env[1826]: time="2024-07-02T00:46:47.508679197Z" level=info msg="StartContainer for \"50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5\" returns successfully" Jul 2 00:46:47.524126 systemd[1]: cri-containerd-50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5.scope: Deactivated successfully. 
Jul 2 00:46:47.575545 env[1826]: time="2024-07-02T00:46:47.575480773Z" level=info msg="shim disconnected" id=50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5 Jul 2 00:46:47.575903 env[1826]: time="2024-07-02T00:46:47.575868653Z" level=warning msg="cleaning up after shim disconnected" id=50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5 namespace=k8s.io Jul 2 00:46:47.576053 env[1826]: time="2024-07-02T00:46:47.576025595Z" level=info msg="cleaning up dead shim" Jul 2 00:46:47.590422 env[1826]: time="2024-07-02T00:46:47.590363426Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4917 runtime=io.containerd.runc.v2\n" Jul 2 00:46:47.773015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522078167.mount: Deactivated successfully. Jul 2 00:46:47.775685 kubelet[2880]: E0702 00:46:47.775595 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-5gbxw" podUID="b281c72c-790a-4fd6-8dd4-323e5d1efb6a" Jul 2 00:46:48.327027 kubelet[2880]: W0702 00:46:48.326940 2880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90b7b7f7_f2b5_4edf_938e_abd2a883a562.slice/cri-containerd-a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343.scope WatchSource:0}: container "a6033f6e40c738400b84afacdf1663a139f5a25e640c5eaebd6e93d4ef931343" in namespace "k8s.io": not found Jul 2 00:46:48.384796 env[1826]: time="2024-07-02T00:46:48.384541352Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:46:48.433854 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3896672305.mount: Deactivated successfully. Jul 2 00:46:48.449269 env[1826]: time="2024-07-02T00:46:48.449142688Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434\"" Jul 2 00:46:48.450662 env[1826]: time="2024-07-02T00:46:48.450608007Z" level=info msg="StartContainer for \"7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434\"" Jul 2 00:46:48.496014 systemd[1]: Started cri-containerd-7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434.scope. Jul 2 00:46:48.559517 env[1826]: time="2024-07-02T00:46:48.559447366Z" level=info msg="StartContainer for \"7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434\" returns successfully" Jul 2 00:46:48.565735 systemd[1]: cri-containerd-7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434.scope: Deactivated successfully. Jul 2 00:46:48.631277 env[1826]: time="2024-07-02T00:46:48.631063141Z" level=info msg="shim disconnected" id=7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434 Jul 2 00:46:48.631870 env[1826]: time="2024-07-02T00:46:48.631780206Z" level=warning msg="cleaning up after shim disconnected" id=7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434 namespace=k8s.io Jul 2 00:46:48.632074 env[1826]: time="2024-07-02T00:46:48.632029588Z" level=info msg="cleaning up dead shim" Jul 2 00:46:48.646859 env[1826]: time="2024-07-02T00:46:48.646798861Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4974 runtime=io.containerd.runc.v2\n" Jul 2 00:46:48.773063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434-rootfs.mount: Deactivated successfully. 
Jul 2 00:46:48.777249 kubelet[2880]: E0702 00:46:48.775004 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nsvgx" podUID="d2c99726-a471-4d80-95b5-fa84d890b0cc" Jul 2 00:46:49.387966 env[1826]: time="2024-07-02T00:46:49.387900062Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:46:49.419601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272882251.mount: Deactivated successfully. Jul 2 00:46:49.432055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571223238.mount: Deactivated successfully. Jul 2 00:46:49.442757 env[1826]: time="2024-07-02T00:46:49.442690257Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96\"" Jul 2 00:46:49.444398 env[1826]: time="2024-07-02T00:46:49.444322827Z" level=info msg="StartContainer for \"6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96\"" Jul 2 00:46:49.476129 systemd[1]: Started cri-containerd-6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96.scope. Jul 2 00:46:49.538449 systemd[1]: cri-containerd-6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96.scope: Deactivated successfully. 
Jul 2 00:46:49.541742 env[1826]: time="2024-07-02T00:46:49.541079619Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0638ba7d_ee4d_4f9f_93de_887e41e1b1a7.slice/cri-containerd-6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96.scope/memory.events\": no such file or directory" Jul 2 00:46:49.545303 env[1826]: time="2024-07-02T00:46:49.545192149Z" level=info msg="StartContainer for \"6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96\" returns successfully" Jul 2 00:46:49.617008 env[1826]: time="2024-07-02T00:46:49.616933497Z" level=info msg="shim disconnected" id=6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96 Jul 2 00:46:49.617008 env[1826]: time="2024-07-02T00:46:49.617001924Z" level=warning msg="cleaning up after shim disconnected" id=6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96 namespace=k8s.io Jul 2 00:46:49.617430 env[1826]: time="2024-07-02T00:46:49.617029513Z" level=info msg="cleaning up dead shim" Jul 2 00:46:49.632604 env[1826]: time="2024-07-02T00:46:49.632531008Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5031 runtime=io.containerd.runc.v2\n" Jul 2 00:46:49.775122 kubelet[2880]: E0702 00:46:49.774968 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-5gbxw" podUID="b281c72c-790a-4fd6-8dd4-323e5d1efb6a" Jul 2 00:46:50.396528 env[1826]: time="2024-07-02T00:46:50.396444368Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 
00:46:50.435747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851464711.mount: Deactivated successfully. Jul 2 00:46:50.446900 env[1826]: time="2024-07-02T00:46:50.446820353Z" level=info msg="CreateContainer within sandbox \"81ee8d2d0e36bc925992047d9b142fe135f16ba1dee188ce6029e553f52da65b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf\"" Jul 2 00:46:50.449273 env[1826]: time="2024-07-02T00:46:50.448557015Z" level=info msg="StartContainer for \"26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf\"" Jul 2 00:46:50.485098 systemd[1]: Started cri-containerd-26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf.scope. Jul 2 00:46:50.550612 env[1826]: time="2024-07-02T00:46:50.550528582Z" level=info msg="StartContainer for \"26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf\" returns successfully" Jul 2 00:46:50.780949 kubelet[2880]: E0702 00:46:50.778550 2880 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nsvgx" podUID="d2c99726-a471-4d80-95b5-fa84d890b0cc" Jul 2 00:46:51.283261 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 00:46:51.445976 kubelet[2880]: W0702 00:46:51.445926 2880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0638ba7d_ee4d_4f9f_93de_887e41e1b1a7.slice/cri-containerd-a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910.scope WatchSource:0}: task a40480ea3d9cb1e05cb99526963d8a3a54f0dbaa48ac139fd1f8cc75a5489910 not found: not found Jul 2 00:46:51.920422 systemd[1]: 
run-containerd-runc-k8s.io-26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf-runc.OzaxZo.mount: Deactivated successfully. Jul 2 00:46:54.151309 systemd[1]: run-containerd-runc-k8s.io-26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf-runc.ujYF35.mount: Deactivated successfully. Jul 2 00:46:54.554884 kubelet[2880]: W0702 00:46:54.554788 2880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0638ba7d_ee4d_4f9f_93de_887e41e1b1a7.slice/cri-containerd-50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5.scope WatchSource:0}: task 50f9b4b98e7c8e345fa890e5be1ade1b958e5758ca4aceb79d3a3d479e4521a5 not found: not found Jul 2 00:46:55.396754 systemd-networkd[1540]: lxc_health: Link UP Jul 2 00:46:55.408446 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:46:55.407096 systemd-networkd[1540]: lxc_health: Gained carrier Jul 2 00:46:55.411575 (udev-worker)[5600]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:46:56.462415 systemd[1]: run-containerd-runc-k8s.io-26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf-runc.vf14Ef.mount: Deactivated successfully. 
Jul 2 00:46:56.797971 kubelet[2880]: I0702 00:46:56.797914 2880 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kgssd" podStartSLOduration=10.797852317 podStartE2EDuration="10.797852317s" podCreationTimestamp="2024-07-02 00:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:46:51.426376245 +0000 UTC m=+131.006415482" watchObservedRunningTime="2024-07-02 00:46:56.797852317 +0000 UTC m=+136.377891542" Jul 2 00:46:57.029601 systemd-networkd[1540]: lxc_health: Gained IPv6LL Jul 2 00:46:57.669607 kubelet[2880]: W0702 00:46:57.665041 2880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0638ba7d_ee4d_4f9f_93de_887e41e1b1a7.slice/cri-containerd-7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434.scope WatchSource:0}: task 7966c5473cbfed96e2427b034aa41f84afa2324401cf49de1f74010ff464c434 not found: not found Jul 2 00:46:58.817767 systemd[1]: run-containerd-runc-k8s.io-26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf-runc.TPZOms.mount: Deactivated successfully. Jul 2 00:47:00.789125 kubelet[2880]: W0702 00:47:00.789046 2880 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0638ba7d_ee4d_4f9f_93de_887e41e1b1a7.slice/cri-containerd-6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96.scope WatchSource:0}: task 6f0fe219203c488e5f669855d85c1b2a9c81c440e267ca29e7eb1c090d316f96 not found: not found Jul 2 00:47:01.111755 systemd[1]: run-containerd-runc-k8s.io-26ec0ea03205ad3311499a6cb03422e539445b9d1be201d762cff1d3b05c35cf-runc.ZoVPiX.mount: Deactivated successfully. 
Jul 2 00:47:01.270970 sshd[4707]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:01.276470 systemd[1]: sshd@27-172.31.27.155:22-139.178.89.65:41386.service: Deactivated successfully. Jul 2 00:47:01.277795 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:47:01.281185 systemd-logind[1817]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:47:01.285048 systemd-logind[1817]: Removed session 28. Jul 2 00:47:11.457453 update_engine[1819]: I0702 00:47:11.457056 1819 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 00:47:11.457453 update_engine[1819]: I0702 00:47:11.457111 1819 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 00:47:11.458079 update_engine[1819]: I0702 00:47:11.457502 1819 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 00:47:11.458843 update_engine[1819]: I0702 00:47:11.458357 1819 omaha_request_params.cc:62] Current group set to lts Jul 2 00:47:11.458843 update_engine[1819]: I0702 00:47:11.458549 1819 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 00:47:11.458843 update_engine[1819]: I0702 00:47:11.458563 1819 update_attempter.cc:643] Scheduling an action processor start. 
Jul 2 00:47:11.458843 update_engine[1819]: I0702 00:47:11.458591 1819 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:47:11.458843 update_engine[1819]: I0702 00:47:11.458642 1819 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 00:47:11.459674 update_engine[1819]: I0702 00:47:11.459623 1819 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 2 00:47:11.459674 update_engine[1819]: I0702 00:47:11.459662 1819 omaha_request_action.cc:271] Request: Jul 2 00:47:11.460329 update_engine[1819]: I0702 00:47:11.459682 1819 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:47:11.460398 locksmithd[1887]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 00:47:11.466371 update_engine[1819]: I0702 00:47:11.466310 1819 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:47:11.466722 update_engine[1819]: I0702 00:47:11.466672 1819 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:47:11.478414 update_engine[1819]: E0702 00:47:11.478351 1819 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:47:11.478590 update_engine[1819]: I0702 00:47:11.478513 1819 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 00:47:15.860330 systemd[1]: cri-containerd-4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d.scope: Deactivated successfully.
Jul 2 00:47:15.860904 systemd[1]: cri-containerd-4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d.scope: Consumed 5.612s CPU time. Jul 2 00:47:15.902611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d-rootfs.mount: Deactivated successfully. Jul 2 00:47:15.927956 env[1826]: time="2024-07-02T00:47:15.927879051Z" level=info msg="shim disconnected" id=4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d Jul 2 00:47:15.927956 env[1826]: time="2024-07-02T00:47:15.927952062Z" level=warning msg="cleaning up after shim disconnected" id=4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d namespace=k8s.io Jul 2 00:47:15.928853 env[1826]: time="2024-07-02T00:47:15.927975655Z" level=info msg="cleaning up dead shim" Jul 2 00:47:15.941853 env[1826]: time="2024-07-02T00:47:15.941781783Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:47:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5718 runtime=io.containerd.runc.v2\n" Jul 2 00:47:16.479506 kubelet[2880]: I0702 00:47:16.477834 2880 scope.go:117] "RemoveContainer" containerID="4103a390fc8690a2c702f9d0c34f1c2acd9c121d77cc1d3379c42616602d0f7d" Jul 2 00:47:16.484412 env[1826]: time="2024-07-02T00:47:16.484356584Z" level=info msg="CreateContainer within sandbox \"776a083d19b363e48b6e98d5517d171743b4cfe5f26cbe7c2b467371dd1060df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 00:47:16.511716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582136142.mount: Deactivated successfully. 
Jul 2 00:47:16.525003 env[1826]: time="2024-07-02T00:47:16.524941089Z" level=info msg="CreateContainer within sandbox \"776a083d19b363e48b6e98d5517d171743b4cfe5f26cbe7c2b467371dd1060df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f074d4c11c4e6585d7064c23c54a05677f8ade73884745d27e9de11f76f690de\"" Jul 2 00:47:16.526085 env[1826]: time="2024-07-02T00:47:16.526021661Z" level=info msg="StartContainer for \"f074d4c11c4e6585d7064c23c54a05677f8ade73884745d27e9de11f76f690de\"" Jul 2 00:47:16.562728 systemd[1]: Started cri-containerd-f074d4c11c4e6585d7064c23c54a05677f8ade73884745d27e9de11f76f690de.scope. Jul 2 00:47:16.654153 env[1826]: time="2024-07-02T00:47:16.654089580Z" level=info msg="StartContainer for \"f074d4c11c4e6585d7064c23c54a05677f8ade73884745d27e9de11f76f690de\" returns successfully" Jul 2 00:47:20.670143 systemd[1]: cri-containerd-d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005.scope: Deactivated successfully. Jul 2 00:47:20.670733 systemd[1]: cri-containerd-d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005.scope: Consumed 2.850s CPU time. Jul 2 00:47:20.709826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005-rootfs.mount: Deactivated successfully. 
Jul 2 00:47:20.726370 env[1826]: time="2024-07-02T00:47:20.726308834Z" level=info msg="shim disconnected" id=d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005 Jul 2 00:47:20.727262 env[1826]: time="2024-07-02T00:47:20.727168501Z" level=warning msg="cleaning up after shim disconnected" id=d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005 namespace=k8s.io Jul 2 00:47:20.727416 env[1826]: time="2024-07-02T00:47:20.727386190Z" level=info msg="cleaning up dead shim" Jul 2 00:47:20.741415 env[1826]: time="2024-07-02T00:47:20.741359580Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:47:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5779 runtime=io.containerd.runc.v2\n" Jul 2 00:47:21.448181 update_engine[1819]: I0702 00:47:21.447495 1819 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:47:21.448181 update_engine[1819]: I0702 00:47:21.447844 1819 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:47:21.448181 update_engine[1819]: I0702 00:47:21.448109 1819 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:47:21.449501 update_engine[1819]: E0702 00:47:21.449326 1819 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:47:21.449501 update_engine[1819]: I0702 00:47:21.449465 1819 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 00:47:21.497685 kubelet[2880]: I0702 00:47:21.497648 2880 scope.go:117] "RemoveContainer" containerID="d510704d6b5bfa8fc2e7c3568e59161e536c781be55be64fef3438b254804005" Jul 2 00:47:21.501786 env[1826]: time="2024-07-02T00:47:21.501703953Z" level=info msg="CreateContainer within sandbox \"fd0698edddd90b145effd3d9bf2e753319ba1e3fb93992872832e31067c1bb59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 00:47:21.530894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033077004.mount: Deactivated successfully. 
Jul 2 00:47:21.543320 env[1826]: time="2024-07-02T00:47:21.543200676Z" level=info msg="CreateContainer within sandbox \"fd0698edddd90b145effd3d9bf2e753319ba1e3fb93992872832e31067c1bb59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ed9303236e4274f56a1b34e11c8b73259a169c438b7659b7faddec43ae375b9d\"" Jul 2 00:47:21.544532 env[1826]: time="2024-07-02T00:47:21.544474755Z" level=info msg="StartContainer for \"ed9303236e4274f56a1b34e11c8b73259a169c438b7659b7faddec43ae375b9d\"" Jul 2 00:47:21.584729 systemd[1]: Started cri-containerd-ed9303236e4274f56a1b34e11c8b73259a169c438b7659b7faddec43ae375b9d.scope. Jul 2 00:47:21.663152 env[1826]: time="2024-07-02T00:47:21.663086461Z" level=info msg="StartContainer for \"ed9303236e4274f56a1b34e11c8b73259a169c438b7659b7faddec43ae375b9d\" returns successfully" Jul 2 00:47:23.547328 kubelet[2880]: E0702 00:47:23.547289 2880 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-155?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:47:31.455282 update_engine[1819]: I0702 00:47:31.455149 1819 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:47:31.455896 update_engine[1819]: I0702 00:47:31.455552 1819 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:47:31.455896 update_engine[1819]: I0702 00:47:31.455825 1819 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:47:31.456345 update_engine[1819]: E0702 00:47:31.456301 1819 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:47:31.456467 update_engine[1819]: I0702 00:47:31.456439 1819 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 00:47:33.549190 kubelet[2880]: E0702 00:47:33.549131 2880 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-155?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"