Mar 17 18:48:41.021583 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 18:48:41.021601 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:48:41.021609 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 17 18:48:41.021616 kernel: printk: bootconsole [pl11] enabled
Mar 17 18:48:41.021621 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:48:41.021626 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98
Mar 17 18:48:41.021633 kernel: random: crng init done
Mar 17 18:48:41.021638 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:48:41.021643 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 17 18:48:41.021649 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021654 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021659 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 17 18:48:41.021666 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021672 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021678 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021684 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021690 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021697 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021702 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 17 18:48:41.021708 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:41.021714 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 17 18:48:41.021719 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:48:41.021725 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 18:48:41.021731 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Mar 17 18:48:41.021736 kernel: Zone ranges:
Mar 17 18:48:41.021742 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 17 18:48:41.021748 kernel: DMA32 empty
Mar 17 18:48:41.021753 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 18:48:41.021760 kernel: Movable zone start for each node
Mar 17 18:48:41.021766 kernel: Early memory node ranges
Mar 17 18:48:41.021772 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 17 18:48:41.021777 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 17 18:48:41.021783 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 17 18:48:41.021789 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 17 18:48:41.021794 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 17 18:48:41.021800 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 17 18:48:41.021805 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 18:48:41.021811 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 18:48:41.021817 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 17 18:48:41.021823 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:48:41.021831 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 18:48:41.021837 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:48:41.021843 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 17 18:48:41.021849 kernel: psci: SMC Calling Convention v1.4
Mar 17 18:48:41.021856 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Mar 17 18:48:41.021863 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Mar 17 18:48:41.021869 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:48:41.021875 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:48:41.021881 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 18:48:41.021887 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:48:41.021893 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:48:41.021899 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 18:48:41.021905 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:48:41.021911 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:48:41.021917 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:48:41.021923 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 18:48:41.021930 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 17 18:48:41.021937 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 18:48:41.021943 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 17 18:48:41.021949 kernel: Policy zone: Normal
Mar 17 18:48:41.021956 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:48:41.021962 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:48:41.021968 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:48:41.021975 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:48:41.021981 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:48:41.021987 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Mar 17 18:48:41.021994 kernel: Memory: 3986936K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207224K reserved, 0K cma-reserved)
Mar 17 18:48:41.022001 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:48:41.022007 kernel: trace event string verifier disabled
Mar 17 18:48:41.022013 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:48:41.022019 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:48:41.022026 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:48:41.022032 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:48:41.022039 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:48:41.022045 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:48:41.022051 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:48:41.022057 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:48:41.022063 kernel: GICv3: 960 SPIs implemented
Mar 17 18:48:41.022069 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:48:41.025111 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:48:41.025132 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:48:41.025140 kernel: GICv3: 16 PPIs implemented
Mar 17 18:48:41.025146 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 17 18:48:41.025153 kernel: ITS: No ITS available, not enabling LPIs
Mar 17 18:48:41.025160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:48:41.025166 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 18:48:41.025173 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 18:48:41.025180 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 18:48:41.025186 kernel: Console: colour dummy device 80x25
Mar 17 18:48:41.025198 kernel: printk: console [tty1] enabled
Mar 17 18:48:41.025205 kernel: ACPI: Core revision 20210730
Mar 17 18:48:41.025211 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 18:48:41.025218 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:48:41.025225 kernel: LSM: Security Framework initializing
Mar 17 18:48:41.025232 kernel: SELinux: Initializing.
Mar 17 18:48:41.025238 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:48:41.025245 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:48:41.025251 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 17 18:48:41.025259 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 17 18:48:41.025266 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:48:41.025272 kernel: Remapping and enabling EFI services.
Mar 17 18:48:41.025278 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:48:41.025285 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:48:41.025291 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 17 18:48:41.025298 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:48:41.025304 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 18:48:41.025311 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:48:41.025317 kernel: SMP: Total of 2 processors activated.
Mar 17 18:48:41.025325 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:48:41.025331 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 17 18:48:41.025338 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 18:48:41.025344 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:48:41.025351 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 18:48:41.025357 kernel: CPU features: detected: LSE atomic instructions
Mar 17 18:48:41.025364 kernel: CPU features: detected: Privileged Access Never
Mar 17 18:48:41.025370 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:48:41.025377 kernel: alternatives: patching kernel code
Mar 17 18:48:41.025385 kernel: devtmpfs: initialized
Mar 17 18:48:41.025395 kernel: KASLR enabled
Mar 17 18:48:41.025402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:48:41.025410 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:48:41.025417 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:48:41.025424 kernel: SMBIOS 3.1.0 present.
Mar 17 18:48:41.025431 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 17 18:48:41.025437 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:48:41.025445 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:48:41.025453 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:48:41.025460 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:48:41.025466 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:48:41.025473 kernel: audit: type=2000 audit(0.087:1): state=initialized audit_enabled=0 res=1
Mar 17 18:48:41.025480 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:48:41.025487 kernel: cpuidle: using governor menu
Mar 17 18:48:41.025493 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:48:41.025501 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:48:41.025508 kernel: ACPI: bus type PCI registered
Mar 17 18:48:41.025515 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:48:41.025521 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:48:41.025528 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:48:41.025535 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:48:41.025542 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:48:41.025548 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:48:41.025555 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:48:41.025563 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:48:41.025570 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:48:41.025576 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:48:41.025583 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:48:41.025590 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:48:41.025596 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:48:41.025603 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:48:41.025610 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:48:41.025617 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:48:41.025625 kernel: ACPI: Interpreter enabled
Mar 17 18:48:41.025631 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:48:41.025638 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 18:48:41.025645 kernel: printk: console [ttyAMA0] enabled
Mar 17 18:48:41.025651 kernel: printk: bootconsole [pl11] disabled
Mar 17 18:48:41.025658 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 17 18:48:41.025665 kernel: iommu: Default domain type: Translated
Mar 17 18:48:41.025672 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:48:41.025679 kernel: vgaarb: loaded
Mar 17 18:48:41.025685 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:48:41.025693 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:48:41.025700 kernel: PTP clock support registered
Mar 17 18:48:41.025706 kernel: Registered efivars operations
Mar 17 18:48:41.025713 kernel: No ACPI PMU IRQ for CPU0
Mar 17 18:48:41.025720 kernel: No ACPI PMU IRQ for CPU1
Mar 17 18:48:41.025726 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:48:41.025733 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:48:41.025740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:48:41.025748 kernel: pnp: PnP ACPI init
Mar 17 18:48:41.025755 kernel: pnp: PnP ACPI: found 0 devices
Mar 17 18:48:41.025762 kernel: NET: Registered PF_INET protocol family
Mar 17 18:48:41.025768 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:48:41.025775 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:48:41.025782 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:48:41.025789 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:48:41.025796 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:48:41.025802 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:48:41.025810 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:48:41.025817 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:48:41.025824 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:48:41.025831 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:48:41.025837 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 17 18:48:41.025844 kernel: kvm [1]: HYP mode not available
Mar 17 18:48:41.025850 kernel: Initialise system trusted keyrings
Mar 17 18:48:41.025857 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:48:41.025864 kernel: Key type asymmetric registered
Mar 17 18:48:41.025871 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:48:41.025878 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:48:41.025885 kernel: io scheduler mq-deadline registered
Mar 17 18:48:41.025891 kernel: io scheduler kyber registered
Mar 17 18:48:41.025898 kernel: io scheduler bfq registered
Mar 17 18:48:41.025904 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:48:41.025911 kernel: thunder_xcv, ver 1.0
Mar 17 18:48:41.025918 kernel: thunder_bgx, ver 1.0
Mar 17 18:48:41.025924 kernel: nicpf, ver 1.0
Mar 17 18:48:41.025931 kernel: nicvf, ver 1.0
Mar 17 18:48:41.026065 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:48:41.026152 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:48:40 UTC (1742237320)
Mar 17 18:48:41.026163 kernel: efifb: probing for efifb
Mar 17 18:48:41.026169 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 18:48:41.026176 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 18:48:41.026183 kernel: efifb: scrolling: redraw
Mar 17 18:48:41.026190 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 18:48:41.026199 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 18:48:41.026206 kernel: fb0: EFI VGA frame buffer device
Mar 17 18:48:41.026213 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 17 18:48:41.026220 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:48:41.026227 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:48:41.026233 kernel: Segment Routing with IPv6
Mar 17 18:48:41.026240 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:48:41.026247 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:48:41.026253 kernel: Key type dns_resolver registered
Mar 17 18:48:41.026260 kernel: registered taskstats version 1
Mar 17 18:48:41.026268 kernel: Loading compiled-in X.509 certificates
Mar 17 18:48:41.026275 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:48:41.026281 kernel: Key type .fscrypt registered
Mar 17 18:48:41.026288 kernel: Key type fscrypt-provisioning registered
Mar 17 18:48:41.026295 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:48:41.026302 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:48:41.026308 kernel: ima: No architecture policies found
Mar 17 18:48:41.026315 kernel: clk: Disabling unused clocks
Mar 17 18:48:41.026322 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:48:41.026329 kernel: Run /init as init process
Mar 17 18:48:41.026336 kernel: with arguments:
Mar 17 18:48:41.026342 kernel: /init
Mar 17 18:48:41.026349 kernel: with environment:
Mar 17 18:48:41.026355 kernel: HOME=/
Mar 17 18:48:41.026362 kernel: TERM=linux
Mar 17 18:48:41.026369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:48:41.026378 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:48:41.026389 systemd[1]: Detected virtualization microsoft.
Mar 17 18:48:41.026396 systemd[1]: Detected architecture arm64.
Mar 17 18:48:41.026403 systemd[1]: Running in initrd.
Mar 17 18:48:41.026410 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:48:41.026416 systemd[1]: Hostname set to .
Mar 17 18:48:41.026424 systemd[1]: Initializing machine ID from random generator.
Mar 17 18:48:41.026431 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:48:41.026439 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:48:41.026446 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:48:41.026453 systemd[1]: Reached target paths.target.
Mar 17 18:48:41.026460 systemd[1]: Reached target slices.target.
Mar 17 18:48:41.026467 systemd[1]: Reached target swap.target.
Mar 17 18:48:41.026474 systemd[1]: Reached target timers.target.
Mar 17 18:48:41.026481 systemd[1]: Listening on iscsid.socket.
Mar 17 18:48:41.026489 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:48:41.026497 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:48:41.026505 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:48:41.026513 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:48:41.026520 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:48:41.026527 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:48:41.026534 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:48:41.026541 systemd[1]: Reached target sockets.target.
Mar 17 18:48:41.026548 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:48:41.026555 systemd[1]: Finished network-cleanup.service.
Mar 17 18:48:41.026564 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:48:41.026571 systemd[1]: Starting systemd-journald.service...
Mar 17 18:48:41.026578 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:48:41.026585 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:48:41.026592 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:48:41.026603 systemd-journald[276]: Journal started
Mar 17 18:48:41.026645 systemd-journald[276]: Runtime Journal (/run/log/journal/c9a3cfb04a0f4fbda099294bc2ee4b4d) is 8.0M, max 78.5M, 70.5M free.
Mar 17 18:48:41.008334 systemd-modules-load[277]: Inserted module 'overlay'
Mar 17 18:48:41.050892 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:48:41.058859 systemd-modules-load[277]: Inserted module 'br_netfilter'
Mar 17 18:48:41.071226 kernel: Bridge firewalling registered
Mar 17 18:48:41.071248 systemd[1]: Started systemd-journald.service.
Mar 17 18:48:41.068149 systemd-resolved[278]: Positive Trust Anchors:
Mar 17 18:48:41.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.068158 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:48:41.213058 kernel: audit: type=1130 audit(1742237321.076:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.213111 kernel: SCSI subsystem initialized
Mar 17 18:48:41.213131 kernel: audit: type=1130 audit(1742237321.100:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.213149 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:48:41.213166 kernel: audit: type=1130 audit(1742237321.124:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.213184 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:48:41.213198 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:48:41.213209 kernel: audit: type=1130 audit(1742237321.161:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.213218 kernel: audit: type=1130 audit(1742237321.187:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.068189 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:48:41.265955 kernel: audit: type=1130 audit(1742237321.217:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.070446 systemd-resolved[278]: Defaulting to hostname 'linux'.
Mar 17 18:48:41.076481 systemd[1]: Started systemd-resolved.service.
Mar 17 18:48:41.100956 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:48:41.125169 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:48:41.161394 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:48:41.338167 kernel: audit: type=1130 audit(1742237321.314:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.183369 systemd-modules-load[277]: Inserted module 'dm_multipath'
Mar 17 18:48:41.188270 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:48:41.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.217930 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:48:41.271301 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:48:41.396320 kernel: audit: type=1130 audit(1742237321.338:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.396344 kernel: audit: type=1130 audit(1742237321.364:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.279900 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:48:41.400684 dracut-cmdline[300]: dracut-dracut-053
Mar 17 18:48:41.288183 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:48:41.301925 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:48:41.414566 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:48:41.314423 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:48:41.338525 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:48:41.365132 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:48:41.479097 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:48:41.495105 kernel: iscsi: registered transport (tcp)
Mar 17 18:48:41.516632 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:48:41.516691 kernel: QLogic iSCSI HBA Driver
Mar 17 18:48:41.552534 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:48:41.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:41.562158 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:48:41.615107 kernel: raid6: neonx8 gen() 13751 MB/s
Mar 17 18:48:41.632100 kernel: raid6: neonx8 xor() 10789 MB/s
Mar 17 18:48:41.653102 kernel: raid6: neonx4 gen() 13505 MB/s
Mar 17 18:48:41.673105 kernel: raid6: neonx4 xor() 11263 MB/s
Mar 17 18:48:41.694096 kernel: raid6: neonx2 gen() 13007 MB/s
Mar 17 18:48:41.715096 kernel: raid6: neonx2 xor() 10522 MB/s
Mar 17 18:48:41.735119 kernel: raid6: neonx1 gen() 10532 MB/s
Mar 17 18:48:41.756109 kernel: raid6: neonx1 xor() 8772 MB/s
Mar 17 18:48:41.777101 kernel: raid6: int64x8 gen() 6250 MB/s
Mar 17 18:48:41.814086 kernel: raid6: int64x8 xor() 3529 MB/s
Mar 17 18:48:41.824103 kernel: raid6: int64x4 gen() 7189 MB/s
Mar 17 18:48:41.839092 kernel: raid6: int64x4 xor() 3855 MB/s
Mar 17 18:48:41.859088 kernel: raid6: int64x2 gen() 6155 MB/s
Mar 17 18:48:41.880090 kernel: raid6: int64x2 xor() 3320 MB/s
Mar 17 18:48:41.900088 kernel: raid6: int64x1 gen() 5044 MB/s
Mar 17 18:48:41.924348 kernel: raid6: int64x1 xor() 2645 MB/s
Mar 17 18:48:41.924361 kernel: raid6: using algorithm neonx8 gen() 13751 MB/s
Mar 17 18:48:41.924369 kernel: raid6: .... xor() 10789 MB/s, rmw enabled
Mar 17 18:48:41.928422 kernel: raid6: using neon recovery algorithm
Mar 17 18:48:41.951040 kernel: xor: measuring software checksum speed
Mar 17 18:48:41.951066 kernel: 8regs : 17181 MB/sec
Mar 17 18:48:41.955093 kernel: 32regs : 19750 MB/sec
Mar 17 18:48:41.963648 kernel: arm64_neon : 25923 MB/sec
Mar 17 18:48:41.963660 kernel: xor: using function: arm64_neon (25923 MB/sec)
Mar 17 18:48:42.020097 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:48:42.031302 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:48:42.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:42.039000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:48:42.039000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:48:42.040407 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:48:42.059218 systemd-udevd[476]: Using default interface naming scheme 'v252'.
Mar 17 18:48:42.065602 systemd[1]: Started systemd-udevd.service.
Mar 17 18:48:42.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:42.076584 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:48:42.094264 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Mar 17 18:48:42.129429 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:48:42.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:42.135272 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:48:42.173894 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:48:42.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:42.230107 kernel: hv_vmbus: Vmbus version:5.3 Mar 17 18:48:42.243104 kernel: hv_vmbus: registering driver hid_hyperv Mar 17 18:48:42.243152 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 17 18:48:42.262676 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 17 18:48:42.262729 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 17 18:48:42.268148 kernel: hv_vmbus: registering driver hv_storvsc Mar 17 18:48:42.271710 kernel: scsi host0: storvsc_host_t Mar 17 18:48:42.271776 kernel: hv_vmbus: registering driver hv_netvsc Mar 17 18:48:42.283016 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 17 18:48:42.300311 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 17 18:48:42.300366 kernel: scsi host1: storvsc_host_t Mar 17 18:48:42.301102 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Mar 17 18:48:42.332102 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 17 18:48:42.355755 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:48:42.355770 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 17 18:48:42.369035 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 17 18:48:42.369244 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 17 18:48:42.369336 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 17 18:48:42.369423 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 17 18:48:42.369512 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 17 18:48:42.369607 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 Mar 17 18:48:42.369617 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 17 18:48:42.393111 kernel: hv_netvsc 002248bc-60e7-0022-48bc-60e7002248bc eth0: VF slot 1 added Mar 17 18:48:42.401112 kernel: hv_vmbus: registering driver hv_pci Mar 17 18:48:42.413246 kernel: hv_pci 8e3a6965-ab97-440b-b2a9-9c5f4c37a9dc: PCI VMBus probing: Using version 0x10004 Mar 17 18:48:42.518070 kernel: hv_pci 8e3a6965-ab97-440b-b2a9-9c5f4c37a9dc: PCI host bridge to bus ab97:00 Mar 17 18:48:42.518190 kernel: pci_bus ab97:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 17 18:48:42.518289 kernel: pci_bus ab97:00: No busn resource found for root bus, will use [bus 00-ff] Mar 17 18:48:42.518360 kernel: pci ab97:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 17 18:48:42.518458 kernel: pci ab97:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 17 18:48:42.518537 kernel: pci ab97:00:02.0: enabling Extended Tags Mar 17 18:48:42.518613 kernel: pci ab97:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ab97:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 17 18:48:42.518690 kernel: pci_bus ab97:00: busn_res: [bus 00-ff] end is updated to 00 Mar 17 18:48:42.518761 kernel: pci ab97:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 17 18:48:42.556097 kernel: mlx5_core ab97:00:02.0: firmware version: 16.30.1284 Mar 17 18:48:42.804321 kernel: mlx5_core ab97:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Mar 17 18:48:42.804432 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (535) Mar 17 18:48:42.804442 kernel: hv_netvsc 002248bc-60e7-0022-48bc-60e7002248bc eth0: VF registering: eth1 Mar 17 18:48:42.804523 kernel: mlx5_core ab97:00:02.0 eth1: joined to eth0 Mar 17 18:48:42.772522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:48:42.806803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Mar 17 18:48:42.828110 kernel: mlx5_core ab97:00:02.0 enP43927s1: renamed from eth1 Mar 17 18:48:42.951560 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:48:42.981810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:48:42.988069 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:48:43.002056 systemd[1]: Starting disk-uuid.service... Mar 17 18:48:43.029117 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:44.045108 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:44.045272 disk-uuid[599]: The operation has completed successfully. Mar 17 18:48:44.106736 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:48:44.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.106833 systemd[1]: Finished disk-uuid.service. Mar 17 18:48:44.116460 systemd[1]: Starting verity-setup.service... Mar 17 18:48:44.165113 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:48:44.376537 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:48:44.383200 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:48:44.394375 systemd[1]: Finished verity-setup.service. Mar 17 18:48:44.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.452807 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:48:44.460882 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Mar 17 18:48:44.457336 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:48:44.458232 systemd[1]: Starting ignition-setup.service... Mar 17 18:48:44.466001 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:48:44.507423 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:48:44.507486 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:44.512429 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:44.555050 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:48:44.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.564000 audit: BPF prog-id=9 op=LOAD Mar 17 18:48:44.565099 systemd[1]: Starting systemd-networkd.service... Mar 17 18:48:44.590503 systemd-networkd[867]: lo: Link UP Mar 17 18:48:44.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.590510 systemd-networkd[867]: lo: Gained carrier Mar 17 18:48:44.590896 systemd-networkd[867]: Enumeration completed Mar 17 18:48:44.591238 systemd[1]: Started systemd-networkd.service. Mar 17 18:48:44.596430 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:48:44.597289 systemd[1]: Reached target network.target. Mar 17 18:48:44.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.612027 systemd[1]: Starting iscsiuio.service... 
Mar 17 18:48:44.643212 iscsid[876]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:48:44.643212 iscsid[876]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Mar 17 18:48:44.643212 iscsid[876]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:48:44.643212 iscsid[876]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:48:44.643212 iscsid[876]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:48:44.643212 iscsid[876]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:48:44.643212 iscsid[876]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:48:44.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.629666 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:48:44.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.630035 systemd[1]: Started iscsiuio.service. Mar 17 18:48:44.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.636020 systemd[1]: Starting iscsid.service... Mar 17 18:48:44.654562 systemd[1]: Started iscsid.service. Mar 17 18:48:44.679095 systemd[1]: Starting dracut-initqueue.service... 
Mar 17 18:48:44.728168 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:48:44.733490 systemd[1]: Finished ignition-setup.service. Mar 17 18:48:44.742650 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:48:44.750716 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:48:44.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:44.759517 systemd[1]: Reached target remote-fs.target. Mar 17 18:48:44.768551 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:48:44.787233 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:48:44.792766 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:48:44.835101 kernel: mlx5_core ab97:00:02.0 enP43927s1: Link up Mar 17 18:48:44.878147 kernel: hv_netvsc 002248bc-60e7-0022-48bc-60e7002248bc eth0: Data path switched to VF: enP43927s1 Mar 17 18:48:44.878366 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:48:44.878486 systemd-networkd[867]: enP43927s1: Link UP Mar 17 18:48:44.878671 systemd-networkd[867]: eth0: Link UP Mar 17 18:48:44.879147 systemd-networkd[867]: eth0: Gained carrier Mar 17 18:48:44.891593 systemd-networkd[867]: enP43927s1: Gained carrier Mar 17 18:48:44.905144 systemd-networkd[867]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:48:46.196266 systemd-networkd[867]: eth0: Gained IPv6LL Mar 17 18:48:47.626153 ignition[888]: Ignition 2.14.0 Mar 17 18:48:47.626173 ignition[888]: Stage: fetch-offline Mar 17 18:48:47.626250 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:47.626275 ignition[888]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:47.705801 ignition[888]: no config dir 
at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:47.711721 ignition[888]: parsed url from cmdline: "" Mar 17 18:48:47.711725 ignition[888]: no config URL provided Mar 17 18:48:47.711734 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:48:47.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:47.712795 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:48:47.752325 kernel: kauditd_printk_skb: 18 callbacks suppressed Mar 17 18:48:47.752349 kernel: audit: type=1130 audit(1742237327.720:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:47.711746 ignition[888]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:48:47.731235 systemd[1]: Starting ignition-fetch.service... 
Mar 17 18:48:47.711752 ignition[888]: failed to fetch config: resource requires networking Mar 17 18:48:47.711884 ignition[888]: Ignition finished successfully Mar 17 18:48:47.740522 ignition[897]: Ignition 2.14.0 Mar 17 18:48:47.740528 ignition[897]: Stage: fetch Mar 17 18:48:47.740703 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:47.740732 ignition[897]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:47.743610 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:47.743739 ignition[897]: parsed url from cmdline: "" Mar 17 18:48:47.743743 ignition[897]: no config URL provided Mar 17 18:48:47.743747 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:48:47.743755 ignition[897]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:48:47.743783 ignition[897]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 18:48:47.873627 ignition[897]: GET result: OK Mar 17 18:48:47.873700 ignition[897]: config has been read from IMDS userdata Mar 17 18:48:47.877042 unknown[897]: fetched base config from "system" Mar 17 18:48:47.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:47.873740 ignition[897]: parsing config with SHA512: 94230e5c1e86c25ff3e3487bbe81f9aee2d7804fc8d080c7a0d7224549359330edf97b6e946bc4c7df5873ecd96f5551f94114c571bd8c9e79baeb6d32318e2e Mar 17 18:48:47.877049 unknown[897]: fetched base config from "system" Mar 17 18:48:47.917211 kernel: audit: type=1130 audit(1742237327.888:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:48:47.877583 ignition[897]: fetch: fetch complete Mar 17 18:48:47.877054 unknown[897]: fetched user config from "azure" Mar 17 18:48:47.877589 ignition[897]: fetch: fetch passed Mar 17 18:48:47.884106 systemd[1]: Finished ignition-fetch.service. Mar 17 18:48:47.955865 kernel: audit: type=1130 audit(1742237327.933:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:47.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:47.877636 ignition[897]: Ignition finished successfully Mar 17 18:48:47.907278 systemd[1]: Starting ignition-kargs.service... Mar 17 18:48:47.918974 ignition[903]: Ignition 2.14.0 Mar 17 18:48:47.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:47.926420 systemd[1]: Finished ignition-kargs.service. Mar 17 18:48:47.995186 kernel: audit: type=1130 audit(1742237327.968:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:47.918980 ignition[903]: Stage: kargs Mar 17 18:48:47.935052 systemd[1]: Starting ignition-disks.service... Mar 17 18:48:47.919111 ignition[903]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:47.961827 systemd[1]: Finished ignition-disks.service. 
Mar 17 18:48:47.919130 ignition[903]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:47.968479 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:48:47.921903 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:47.991328 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:48:47.925335 ignition[903]: kargs: kargs passed Mar 17 18:48:47.999453 systemd[1]: Reached target local-fs.target. Mar 17 18:48:47.925404 ignition[903]: Ignition finished successfully Mar 17 18:48:48.006384 systemd[1]: Reached target sysinit.target. Mar 17 18:48:47.944973 ignition[909]: Ignition 2.14.0 Mar 17 18:48:48.014208 systemd[1]: Reached target basic.target. Mar 17 18:48:47.944980 ignition[909]: Stage: disks Mar 17 18:48:48.025188 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:48:47.945106 ignition[909]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:47.945125 ignition[909]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:47.947856 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:47.955255 ignition[909]: disks: disks passed Mar 17 18:48:47.956227 ignition[909]: Ignition finished successfully Mar 17 18:48:48.123744 systemd-fsck[917]: ROOT: clean, 623/7326000 files, 481077/7359488 blocks Mar 17 18:48:48.137139 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:48:48.164266 kernel: audit: type=1130 audit(1742237328.141:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:48.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:48.142692 systemd[1]: Mounting sysroot.mount... Mar 17 18:48:48.177094 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:48:48.177980 systemd[1]: Mounted sysroot.mount. Mar 17 18:48:48.181894 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:48:48.225704 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:48:48.234127 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:48:48.238742 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:48:48.238794 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:48:48.249117 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:48:48.317112 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:48:48.322463 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:48:48.343101 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (928) Mar 17 18:48:48.350129 initrd-setup-root[933]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:48:48.361295 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:48:48.361318 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:48.366109 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:48.371513 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Mar 17 18:48:48.384850 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:48:48.406912 initrd-setup-root[967]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:48:48.416497 initrd-setup-root[975]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:48:49.025690 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:48:49.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:49.049274 systemd[1]: Starting ignition-mount.service... Mar 17 18:48:49.061417 kernel: audit: type=1130 audit(1742237329.030:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:49.058905 systemd[1]: Starting sysroot-boot.service... Mar 17 18:48:49.071523 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:48:49.071652 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:48:49.155833 systemd[1]: Finished sysroot-boot.service. Mar 17 18:48:49.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:49.181094 kernel: audit: type=1130 audit(1742237329.160:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:49.188124 ignition[997]: INFO : Ignition 2.14.0 Mar 17 18:48:49.188124 ignition[997]: INFO : Stage: mount Mar 17 18:48:49.198176 ignition[997]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:49.198176 ignition[997]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:49.198176 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:49.224899 ignition[997]: INFO : mount: mount passed Mar 17 18:48:49.224899 ignition[997]: INFO : Ignition finished successfully Mar 17 18:48:49.254362 kernel: audit: type=1130 audit(1742237329.229:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:49.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:49.205842 systemd[1]: Finished ignition-mount.service. Mar 17 18:48:49.679316 coreos-metadata[927]: Mar 17 18:48:49.679 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 18:48:49.689300 coreos-metadata[927]: Mar 17 18:48:49.689 INFO Fetch successful Mar 17 18:48:49.722796 coreos-metadata[927]: Mar 17 18:48:49.722 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 18:48:49.746514 coreos-metadata[927]: Mar 17 18:48:49.746 INFO Fetch successful Mar 17 18:48:49.760692 coreos-metadata[927]: Mar 17 18:48:49.760 INFO wrote hostname ci-3510.3.7-a-2552a29e1b to /sysroot/etc/hostname Mar 17 18:48:49.770134 systemd[1]: Finished flatcar-metadata-hostname.service. 
Mar 17 18:48:49.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:49.795767 systemd[1]: Starting ignition-files.service... Mar 17 18:48:49.805436 kernel: audit: type=1130 audit(1742237329.775:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:49.807899 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:48:49.828096 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1006) Mar 17 18:48:49.839972 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:48:49.839990 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:49.840000 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:49.851641 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Mar 17 18:48:49.868320 ignition[1025]: INFO : Ignition 2.14.0 Mar 17 18:48:49.868320 ignition[1025]: INFO : Stage: files Mar 17 18:48:49.879353 ignition[1025]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:49.879353 ignition[1025]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:49.879353 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:49.879353 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:48:49.879353 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:48:49.879353 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:48:50.273181 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:48:50.281536 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:48:50.281536 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:48:50.281010 unknown[1025]: wrote ssh authorized keys file for user: core Mar 17 18:48:50.301962 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 17 18:48:50.301962 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Mar 17 18:48:50.746467 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 18:48:50.882651 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 17 18:48:50.893320 ignition[1025]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:48:50.893320 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 17 18:48:51.317423 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:48:51.390302 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:48:51.400343 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2077677389" Mar 17 18:48:51.560385 ignition[1025]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2077677389": device or resource busy Mar 17 18:48:51.560385 ignition[1025]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2077677389", trying btrfs: device or resource busy Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2077677389" Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2077677389" Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2077677389" Mar 17 18:48:51.560385 ignition[1025]: INFO 
: files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2077677389" Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3173186798" Mar 17 18:48:51.560385 ignition[1025]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3173186798": device or resource busy Mar 17 18:48:51.560385 ignition[1025]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3173186798", trying btrfs: device or resource busy Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3173186798" Mar 17 18:48:51.560385 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3173186798" Mar 17 18:48:51.411113 systemd[1]: mnt-oem2077677389.mount: Deactivated successfully. 
Mar 17 18:48:51.721752 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3173186798" Mar 17 18:48:51.721752 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3173186798" Mar 17 18:48:51.721752 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:48:51.721752 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:48:51.721752 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 17 18:48:51.434175 systemd[1]: mnt-oem3173186798.mount: Deactivated successfully. Mar 17 18:48:51.877704 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Mar 17 18:48:52.528381 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:48:52.528381 ignition[1025]: INFO : files: op(14): [started] processing unit "waagent.service" Mar 17 18:48:52.528381 ignition[1025]: INFO : files: op(14): [finished] processing unit "waagent.service" Mar 17 18:48:52.528381 ignition[1025]: INFO : files: op(15): [started] processing unit "nvidia.service" Mar 17 18:48:52.528381 ignition[1025]: INFO : files: op(15): [finished] processing unit "nvidia.service" Mar 17 18:48:52.528381 ignition[1025]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Mar 17 18:48:52.602126 kernel: audit: type=1130 audit(1742237332.551:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:48:52.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:48:52.602218 ignition[1025]: INFO : files: files passed Mar 17 18:48:52.602218 ignition[1025]: INFO : Ignition finished successfully Mar 17 18:48:52.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.541656 systemd[1]: Finished ignition-files.service. Mar 17 18:48:52.554474 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:48:52.756836 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:48:52.789970 kernel: kauditd_printk_skb: 5 callbacks suppressed Mar 17 18:48:52.789998 kernel: audit: type=1130 audit(1742237332.761:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.581211 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Mar 17 18:48:52.584219 systemd[1]: Starting ignition-quench.service... Mar 17 18:48:52.596599 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:48:52.596714 systemd[1]: Finished ignition-quench.service. Mar 17 18:48:52.644118 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:48:52.856174 kernel: audit: type=1131 audit(1742237332.833:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.656941 systemd[1]: Reached target ignition-complete.target. Mar 17 18:48:52.670611 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:48:52.708703 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:48:52.708809 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:48:52.713833 systemd[1]: Reached target initrd-fs.target. Mar 17 18:48:52.727323 systemd[1]: Reached target initrd.target. Mar 17 18:48:52.735301 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:48:52.736216 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:48:52.754415 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:48:52.762221 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:48:52.800472 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:48:52.806506 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:48:52.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.815772 systemd[1]: Stopped target timers.target. 
Mar 17 18:48:52.980049 kernel: audit: type=1131 audit(1742237332.953:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.825142 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:48:52.825206 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:48:52.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.833459 systemd[1]: Stopped target initrd.target. Mar 17 18:48:52.857206 systemd[1]: Stopped target basic.target. Mar 17 18:48:53.039271 kernel: audit: type=1131 audit(1742237332.989:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.039293 kernel: audit: type=1131 audit(1742237333.018:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.866013 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:48:52.875161 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:48:53.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.884726 systemd[1]: Stopped target initrd-root-device.target. 
Mar 17 18:48:53.098404 kernel: audit: type=1131 audit(1742237333.043:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.098436 kernel: audit: type=1131 audit(1742237333.052:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.893381 systemd[1]: Stopped target remote-fs.target. Mar 17 18:48:53.105346 iscsid[876]: iscsid shutting down. Mar 17 18:48:53.143215 kernel: audit: type=1131 audit(1742237333.114:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.143241 kernel: audit: type=1131 audit(1742237333.138:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:53.143318 ignition[1063]: INFO : Ignition 2.14.0 Mar 17 18:48:53.143318 ignition[1063]: INFO : Stage: umount Mar 17 18:48:53.143318 ignition[1063]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:53.143318 ignition[1063]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:53.143318 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:53.143318 ignition[1063]: INFO : umount: umount passed Mar 17 18:48:53.143318 ignition[1063]: INFO : Ignition finished successfully Mar 17 18:48:53.232861 kernel: audit: type=1131 audit(1742237333.161:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:53.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.901338 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:48:53.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.909998 systemd[1]: Stopped target sysinit.target. Mar 17 18:48:53.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.921154 systemd[1]: Stopped target local-fs.target. Mar 17 18:48:52.929202 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:48:52.937494 systemd[1]: Stopped target swap.target. Mar 17 18:48:52.945315 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:48:52.945380 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:48:52.953512 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:48:52.980205 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:48:53.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:52.980274 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:48:53.009322 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:48:53.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.009374 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Mar 17 18:48:53.018303 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:48:53.018346 systemd[1]: Stopped ignition-files.service. Mar 17 18:48:53.043373 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:48:53.043426 systemd[1]: Stopped flatcar-metadata-hostname.service. Mar 17 18:48:53.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.070896 systemd[1]: Stopping ignition-mount.service... Mar 17 18:48:53.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.095477 systemd[1]: Stopping iscsid.service... Mar 17 18:48:53.382000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:48:53.102410 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:48:53.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.109437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:48:53.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.109506 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:48:53.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.114512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:48:53.114555 systemd[1]: Stopped dracut-pre-trigger.service. 
Mar 17 18:48:53.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.139466 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:48:53.139566 systemd[1]: Stopped iscsid.service. Mar 17 18:48:53.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.161445 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:48:53.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.161536 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:48:53.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.192503 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:48:53.506369 kernel: hv_netvsc 002248bc-60e7-0022-48bc-60e7002248bc eth0: Data path switched from VF: enP43927s1 Mar 17 18:48:53.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.192600 systemd[1]: Stopped ignition-mount.service. 
Mar 17 18:48:53.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.197316 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:48:53.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.197375 systemd[1]: Stopped ignition-disks.service. Mar 17 18:48:53.220310 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:48:53.220368 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:48:53.228369 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:48:53.228413 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:48:53.237104 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:48:53.237145 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:48:53.245808 systemd[1]: Stopped target paths.target. Mar 17 18:48:53.253793 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:48:53.261559 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:48:53.266841 systemd[1]: Stopped target slices.target. Mar 17 18:48:53.274306 systemd[1]: Stopped target sockets.target. Mar 17 18:48:53.283392 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:48:53.283439 systemd[1]: Closed iscsid.socket. Mar 17 18:48:53.291678 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 17 18:48:53.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:53.291732 systemd[1]: Stopped ignition-setup.service. Mar 17 18:48:53.299920 systemd[1]: Stopping iscsiuio.service... Mar 17 18:48:53.310713 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:48:53.311195 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:48:53.311284 systemd[1]: Stopped iscsiuio.service. Mar 17 18:48:53.317901 systemd[1]: Stopped target network.target. Mar 17 18:48:53.327673 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:48:53.327713 systemd[1]: Closed iscsiuio.socket. Mar 17 18:48:53.336183 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:48:53.345957 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:48:53.353128 systemd-networkd[867]: eth0: DHCPv6 lease lost Mar 17 18:48:53.640000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:48:53.358448 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:48:53.358544 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:48:53.365908 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:48:53.365998 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:48:53.374907 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:48:53.374953 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:48:53.383535 systemd[1]: Stopping network-cleanup.service... Mar 17 18:48:53.390722 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:48:53.390795 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:48:53.395937 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:48:53.395989 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:48:53.409464 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Mar 17 18:48:53.409516 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:48:53.414610 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:48:53.424886 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:48:53.425466 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:48:53.425602 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:48:53.434669 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:48:53.434723 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:48:53.442834 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:48:53.442871 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:48:53.447903 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:48:53.447960 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:48:53.457265 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:48:53.457315 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:48:53.464786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:48:53.464834 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:48:53.475199 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:48:53.482872 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:48:53.482939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 18:48:53.488274 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:48:53.488318 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:48:53.499522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:48:53.499575 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:48:53.512057 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 18:48:53.512578 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Mar 17 18:48:53.512686 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:48:53.587807 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:48:53.587923 systemd[1]: Stopped network-cleanup.service. Mar 17 18:48:54.103337 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:48:54.103453 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:48:54.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:54.112586 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:48:54.120829 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:48:54.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:54.120900 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:48:54.130796 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:48:54.149944 systemd[1]: Switching root. Mar 17 18:48:54.176452 systemd-journald[276]: Journal stopped Mar 17 18:49:06.415544 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Mar 17 18:49:06.415566 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:49:06.415578 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 17 18:49:06.415588 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:49:06.415597 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:49:06.415605 kernel: SELinux: policy capability open_perms=1 Mar 17 18:49:06.415614 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:49:06.415622 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:49:06.415630 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:49:06.415638 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:49:06.415648 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:49:06.415656 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:49:06.415665 systemd[1]: Successfully loaded SELinux policy in 259.289ms. Mar 17 18:49:06.415675 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.007ms. Mar 17 18:49:06.415687 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:49:06.415697 systemd[1]: Detected virtualization microsoft. Mar 17 18:49:06.415705 systemd[1]: Detected architecture arm64. Mar 17 18:49:06.415714 systemd[1]: Detected first boot. Mar 17 18:49:06.415724 systemd[1]: Hostname set to . Mar 17 18:49:06.415732 systemd[1]: Initializing machine ID from random generator. Mar 17 18:49:06.415742 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Mar 17 18:49:06.415750 kernel: kauditd_printk_skb: 35 callbacks suppressed Mar 17 18:49:06.415761 kernel: audit: type=1400 audit(1742237339.336:89): avc: denied { associate } for pid=1097 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:49:06.415772 kernel: audit: type=1300 audit(1742237339.336:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022802 a1=4000028ae0 a2=4000026d00 a3=32 items=0 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:06.415782 kernel: audit: type=1327 audit(1742237339.336:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:49:06.415791 kernel: audit: type=1400 audit(1742237339.345:90): avc: denied { associate } for pid=1097 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:49:06.415801 kernel: audit: type=1300 audit(1742237339.345:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228d9 a2=1ed a3=0 items=2 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:06.415811 kernel: audit: type=1307 audit(1742237339.345:90): cwd="/" Mar 17 18:49:06.415820 kernel: audit: type=1302 audit(1742237339.345:90): 
item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:06.415829 kernel: audit: type=1302 audit(1742237339.345:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:06.415838 kernel: audit: type=1327 audit(1742237339.345:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:49:06.415847 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:49:06.415857 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:49:06.415867 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:49:06.415879 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:49:06.415888 kernel: audit: type=1334 audit(1742237345.694:91): prog-id=12 op=LOAD Mar 17 18:49:06.415897 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:49:06.415905 kernel: audit: type=1334 audit(1742237345.694:92): prog-id=3 op=UNLOAD Mar 17 18:49:06.415914 systemd[1]: Stopped initrd-switch-root.service. 
Mar 17 18:49:06.415924 kernel: audit: type=1334 audit(1742237345.694:93): prog-id=13 op=LOAD Mar 17 18:49:06.415935 kernel: audit: type=1334 audit(1742237345.694:94): prog-id=14 op=LOAD Mar 17 18:49:06.415945 kernel: audit: type=1334 audit(1742237345.694:95): prog-id=4 op=UNLOAD Mar 17 18:49:06.415954 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:49:06.415963 kernel: audit: type=1334 audit(1742237345.694:96): prog-id=5 op=UNLOAD Mar 17 18:49:06.415973 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:49:06.415984 kernel: audit: type=1131 audit(1742237345.695:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.415993 kernel: audit: type=1334 audit(1742237345.709:98): prog-id=12 op=UNLOAD Mar 17 18:49:06.416002 kernel: audit: type=1130 audit(1742237345.740:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.416012 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:49:06.416022 kernel: audit: type=1131 audit(1742237345.741:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.416031 systemd[1]: Created slice system-getty.slice. Mar 17 18:49:06.416040 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:49:06.416050 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:49:06.416060 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:49:06.416069 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:49:06.416094 systemd[1]: Created slice user.slice. 
Mar 17 18:49:06.416121 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:49:06.416137 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:49:06.416150 systemd[1]: Set up automount boot.automount. Mar 17 18:49:06.416161 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:49:06.416173 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:49:06.416185 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:49:06.416196 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:49:06.416208 systemd[1]: Reached target integritysetup.target. Mar 17 18:49:06.416219 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:49:06.416234 systemd[1]: Reached target remote-fs.target. Mar 17 18:49:06.416246 systemd[1]: Reached target slices.target. Mar 17 18:49:06.416258 systemd[1]: Reached target swap.target. Mar 17 18:49:06.416270 systemd[1]: Reached target torcx.target. Mar 17 18:49:06.416282 systemd[1]: Reached target veritysetup.target. Mar 17 18:49:06.416293 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:49:06.416306 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:49:06.416317 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:49:06.416328 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:49:06.416339 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:49:06.416351 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:49:06.416363 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:49:06.416375 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:49:06.416388 systemd[1]: Mounting media.mount... Mar 17 18:49:06.416400 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:49:06.416411 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:49:06.416423 systemd[1]: Mounting tmp.mount... Mar 17 18:49:06.416451 systemd[1]: Starting flatcar-tmpfiles.service... 
Mar 17 18:49:06.416461 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:06.416471 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:49:06.416481 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:49:06.416491 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:06.416502 systemd[1]: Starting modprobe@drm.service... Mar 17 18:49:06.416512 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:06.416521 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:49:06.416531 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:06.416541 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:49:06.416551 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:49:06.416560 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:49:06.416570 kernel: loop: module loaded Mar 17 18:49:06.416579 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:49:06.416589 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:49:06.416599 systemd[1]: Stopped systemd-journald.service. Mar 17 18:49:06.416608 systemd[1]: systemd-journald.service: Consumed 3.098s CPU time. Mar 17 18:49:06.416618 systemd[1]: Starting systemd-journald.service... Mar 17 18:49:06.416627 kernel: fuse: init (API version 7.34) Mar 17 18:49:06.416636 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:49:06.416646 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:49:06.416655 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:49:06.416665 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:49:06.416675 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:49:06.416685 systemd[1]: Stopped verity-setup.service. Mar 17 18:49:06.416695 systemd[1]: Mounted dev-hugepages.mount. 
Mar 17 18:49:06.416705 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:49:06.416719 systemd-journald[1186]: Journal started Mar 17 18:49:06.416765 systemd-journald[1186]: Runtime Journal (/run/log/journal/cfb024412a72469b94372680c77f5e5f) is 8.0M, max 78.5M, 70.5M free. Mar 17 18:48:56.290000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:48:57.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:57.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:57.037000 audit: BPF prog-id=10 op=LOAD Mar 17 18:48:57.037000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:48:57.037000 audit: BPF prog-id=11 op=LOAD Mar 17 18:48:57.037000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:48:59.336000 audit[1097]: AVC avc: denied { associate } for pid=1097 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:48:59.336000 audit[1097]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022802 a1=4000028ae0 a2=4000026d00 a3=32 items=0 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:59.336000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 
17 18:48:59.345000 audit[1097]: AVC avc: denied { associate } for pid=1097 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:48:59.345000 audit[1097]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228d9 a2=1ed a3=0 items=2 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:59.345000 audit: CWD cwd="/" Mar 17 18:48:59.345000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:59.345000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:59.345000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:49:05.694000 audit: BPF prog-id=12 op=LOAD Mar 17 18:49:05.694000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:49:05.694000 audit: BPF prog-id=13 op=LOAD Mar 17 18:49:05.694000 audit: BPF prog-id=14 op=LOAD Mar 17 18:49:05.694000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:49:05.694000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:49:05.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:05.709000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:49:05.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:05.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.291000 audit: BPF prog-id=15 op=LOAD Mar 17 18:49:06.291000 audit: BPF prog-id=16 op=LOAD Mar 17 18:49:06.291000 audit: BPF prog-id=17 op=LOAD Mar 17 18:49:06.291000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:49:06.291000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:49:06.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:06.412000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:49:06.412000 audit[1186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc9197040 a2=4000 a3=1 items=0 ppid=1 pid=1186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:06.412000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:49:05.692970 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:48:59.293870 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:49:05.692983 systemd[1]: Unnecessary job was removed for dev-sda6.device. Mar 17 18:48:59.321213 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:49:05.695588 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:48:59.321256 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:49:05.695916 systemd[1]: systemd-journald.service: Consumed 3.098s CPU time. 
Mar 17 18:48:59.321310 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:48:59.321329 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:48:59.321387 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:48:59.321402 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:48:59.321599 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:48:59.321630 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:48:59.321642 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:48:59.321984 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:48:59.322034 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:48:59.322053 /usr/lib/systemd/system-generators/torcx-generator[1097]: 
time="2025-03-17T18:48:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:48:59.322067 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:48:59.322101 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:48:59.322114 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:48:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:49:04.488662 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:49:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:49:04.488924 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:49:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:49:04.489024 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:49:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:49:04.489210 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:49:04Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:49:04.489261 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:49:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:49:04.489315 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-03-17T18:49:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:49:06.428295 systemd[1]: Started systemd-journald.service. Mar 17 18:49:06.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.429068 systemd[1]: Mounted media.mount. Mar 17 18:49:06.432861 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:49:06.437271 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:49:06.441626 systemd[1]: Mounted tmp.mount. Mar 17 18:49:06.445260 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:49:06.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.450157 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:49:06.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:06.455000 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:49:06.455136 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:49:06.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.459777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:06.459898 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:06.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.465065 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:49:06.465369 systemd[1]: Finished modprobe@drm.service. Mar 17 18:49:06.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.469809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 18:49:06.469929 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:06.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.474966 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:49:06.475246 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:49:06.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.480015 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:06.480156 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:06.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.484762 systemd[1]: Finished systemd-network-generator.service. 
Mar 17 18:49:06.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.490017 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:49:06.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.495305 systemd[1]: Reached target network-pre.target. Mar 17 18:49:06.501180 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:49:06.506681 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:49:06.510566 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:49:06.525606 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:49:06.531056 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:49:06.535370 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:49:06.536551 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:49:06.540782 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:06.541965 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:49:06.547794 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:49:06.552592 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:49:06.571400 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:49:06.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:06.577816 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:49:06.608402 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:49:06.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.614717 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:49:06.628753 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:49:06.670727 systemd-journald[1186]: Time spent on flushing to /var/log/journal/cfb024412a72469b94372680c77f5e5f is 14.009ms for 1100 entries. Mar 17 18:49:06.670727 systemd-journald[1186]: System Journal (/var/log/journal/cfb024412a72469b94372680c77f5e5f) is 8.0M, max 2.6G, 2.6G free. Mar 17 18:49:06.920588 systemd-journald[1186]: Received client request to flush runtime journal. Mar 17 18:49:06.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:06.702140 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:49:06.706895 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:49:06.782836 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:49:06.921519 systemd[1]: Finished systemd-journal-flush.service. 
Mar 17 18:49:06.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:07.492567 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:49:07.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:07.498551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:49:07.955441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:49:07.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:08.413389 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:49:08.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:08.418000 audit: BPF prog-id=18 op=LOAD Mar 17 18:49:08.418000 audit: BPF prog-id=19 op=LOAD Mar 17 18:49:08.418000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:49:08.418000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:49:08.419437 systemd[1]: Starting systemd-udevd.service... Mar 17 18:49:08.437530 systemd-udevd[1222]: Using default interface naming scheme 'v252'. Mar 17 18:49:08.677637 systemd[1]: Started systemd-udevd.service. Mar 17 18:49:08.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:08.688000 audit: BPF prog-id=20 op=LOAD Mar 17 18:49:08.689379 systemd[1]: Starting systemd-networkd.service... Mar 17 18:49:08.726000 audit: BPF prog-id=21 op=LOAD Mar 17 18:49:08.726000 audit: BPF prog-id=22 op=LOAD Mar 17 18:49:08.726000 audit: BPF prog-id=23 op=LOAD Mar 17 18:49:08.727168 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:49:08.740701 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Mar 17 18:49:08.775474 systemd[1]: Started systemd-userdbd.service. Mar 17 18:49:08.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:08.797109 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:49:08.802000 audit[1230]: AVC avc: denied { confidentiality } for pid=1230 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:49:08.815444 kernel: hv_vmbus: registering driver hv_balloon Mar 17 18:49:08.815496 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 18:49:08.815512 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 17 18:49:08.802000 audit[1230]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab12904b00 a1=aa2c a2=ffffaf5424b0 a3=aaab12863010 items=12 ppid=1222 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:49:08.802000 audit: CWD cwd="/" Mar 17 18:49:08.802000 audit: PATH item=0 name=(null) inode=6919 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=1 
name=(null) inode=10712 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=2 name=(null) inode=10712 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=3 name=(null) inode=10713 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=4 name=(null) inode=10712 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=5 name=(null) inode=10714 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=6 name=(null) inode=10712 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=7 name=(null) inode=10715 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=8 name=(null) inode=10712 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=9 name=(null) inode=10716 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=10 name=(null) inode=10712 dev=00:0a mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PATH item=11 name=(null) inode=10717 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:49:08.802000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:49:08.836117 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 18:49:08.850331 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 18:49:08.850384 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 18:49:08.855613 kernel: Console: switching to colour dummy device 80x25 Mar 17 18:49:08.858111 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 18:49:08.902135 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 18:49:08.902369 kernel: hv_vmbus: registering driver hv_utils Mar 17 18:49:08.906774 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 18:49:08.906912 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 18:49:08.910296 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 18:49:14.721439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:49:14.728154 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:49:14.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:14.734922 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:49:14.736731 kernel: kauditd_printk_skb: 64 callbacks suppressed Mar 17 18:49:14.736799 kernel: audit: type=1130 audit(1742237354.732:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:14.788946 systemd-networkd[1243]: lo: Link UP Mar 17 18:49:14.789237 systemd-networkd[1243]: lo: Gained carrier Mar 17 18:49:14.789730 systemd-networkd[1243]: Enumeration completed Mar 17 18:49:14.789928 systemd[1]: Started systemd-networkd.service. Mar 17 18:49:14.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:14.796031 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:49:14.815740 kernel: audit: type=1130 audit(1742237354.794:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:14.885803 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:49:14.934652 kernel: mlx5_core ab97:00:02.0 enP43927s1: Link up Mar 17 18:49:14.960648 kernel: hv_netvsc 002248bc-60e7-0022-48bc-60e7002248bc eth0: Data path switched to VF: enP43927s1 Mar 17 18:49:14.960800 systemd-networkd[1243]: enP43927s1: Link UP Mar 17 18:49:14.960884 systemd-networkd[1243]: eth0: Link UP Mar 17 18:49:14.960888 systemd-networkd[1243]: eth0: Gained carrier Mar 17 18:49:14.966006 systemd-networkd[1243]: enP43927s1: Gained carrier Mar 17 18:49:14.975737 systemd-networkd[1243]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:49:15.074014 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:49:15.111583 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:49:15.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:15.121805 systemd[1]: Reached target cryptsetup.target. Mar 17 18:49:15.137652 kernel: audit: type=1130 audit(1742237355.115:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:15.138509 systemd[1]: Starting lvm2-activation.service... Mar 17 18:49:15.142883 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:49:15.169717 systemd[1]: Finished lvm2-activation.service. Mar 17 18:49:15.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:15.174334 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:49:15.196050 kernel: audit: type=1130 audit(1742237355.173:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:15.196587 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:49:15.196765 systemd[1]: Reached target local-fs.target. Mar 17 18:49:15.201359 systemd[1]: Reached target machines.target. Mar 17 18:49:15.207147 systemd[1]: Starting ldconfig.service... Mar 17 18:49:15.211597 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:15.211843 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:15.213198 systemd[1]: Starting systemd-boot-update.service... 
Mar 17 18:49:15.218860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:49:15.225669 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:49:15.231989 systemd[1]: Starting systemd-sysext.service... Mar 17 18:49:15.252174 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1302 (bootctl) Mar 17 18:49:15.253503 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:49:15.438166 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:49:15.594020 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:49:15.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:15.618683 kernel: audit: type=1130 audit(1742237355.599:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:15.761794 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:49:15.761986 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:49:15.902665 kernel: loop0: detected capacity change from 0 to 201592 Mar 17 18:49:15.990652 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:49:16.010654 kernel: loop1: detected capacity change from 0 to 201592 Mar 17 18:49:16.015450 (sd-sysext)[1314]: Using extensions 'kubernetes'. Mar 17 18:49:16.016606 (sd-sysext)[1314]: Merged extensions into '/usr'. Mar 17 18:49:16.033121 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:49:16.037873 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.039315 systemd[1]: Starting modprobe@dm_mod.service... 
Mar 17 18:49:16.044578 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:16.050545 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:16.054771 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.054913 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:16.057254 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:49:16.061679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:16.061814 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:16.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.067287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:16.067400 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:16.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.106409 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:16.106700 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:16.106994 kernel: audit: type=1130 audit(1742237356.066:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.107040 kernel: audit: type=1131 audit(1742237356.066:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:49:16.107060 kernel: audit: type=1130 audit(1742237356.105:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.107080 kernel: audit: type=1131 audit(1742237356.105:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.147046 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:49:16.147163 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.148385 systemd[1]: Finished systemd-sysext.service. Mar 17 18:49:16.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:16.168319 kernel: audit: type=1130 audit(1742237356.145:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.168412 systemd-fsck[1310]: fsck.fat 4.2 (2021-01-31) Mar 17 18:49:16.168412 systemd-fsck[1310]: /dev/sda1: 236 files, 117179/258078 clusters Mar 17 18:49:16.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.172291 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:49:16.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.180369 systemd[1]: Mounting boot.mount... Mar 17 18:49:16.185914 systemd[1]: Starting ensure-sysext.service... Mar 17 18:49:16.191370 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:49:16.202213 systemd[1]: Reloading. Mar 17 18:49:16.259786 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2025-03-17T18:49:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:49:16.259815 /usr/lib/systemd/system-generators/torcx-generator[1344]: time="2025-03-17T18:49:16Z" level=info msg="torcx already run" Mar 17 18:49:16.340290 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:49:16.340313 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:49:16.355845 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:49:16.417000 audit: BPF prog-id=24 op=LOAD Mar 17 18:49:16.417000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:49:16.418000 audit: BPF prog-id=25 op=LOAD Mar 17 18:49:16.418000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:49:16.418000 audit: BPF prog-id=26 op=LOAD Mar 17 18:49:16.418000 audit: BPF prog-id=27 op=LOAD Mar 17 18:49:16.418000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:49:16.418000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:49:16.419000 audit: BPF prog-id=28 op=LOAD Mar 17 18:49:16.419000 audit: BPF prog-id=29 op=LOAD Mar 17 18:49:16.419000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:49:16.419000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:49:16.420000 audit: BPF prog-id=30 op=LOAD Mar 17 18:49:16.420000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:49:16.420000 audit: BPF prog-id=31 op=LOAD Mar 17 18:49:16.420000 audit: BPF prog-id=32 op=LOAD Mar 17 18:49:16.420000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:49:16.420000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:49:16.433228 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.434739 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:16.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:16.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:16.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:49:16.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:16.439986 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:16.446026 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:16.450019 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.450147 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:49:16.450879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:16.451015 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:16.452601 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:49:16.455985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:16.456105 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:16.461600 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:16.461748 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:16.466725 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:49:16.466814 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.468203 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.469567 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:16.475253 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:16.480802 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:16.484586 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.484727 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:16.487279 systemd[1]: Mounted boot.mount. Mar 17 18:49:16.493992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:16.494129 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:16.499187 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:49:16.504169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:16.504294 systemd[1]: Finished modprobe@efi_pstore.service. 
Mar 17 18:49:16.509560 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:16.509843 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:16.514487 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:49:16.514579 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.516845 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.518195 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:49:16.523701 systemd[1]: Starting modprobe@drm.service... Mar 17 18:49:16.528503 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:49:16.533907 systemd[1]: Starting modprobe@loop.service... Mar 17 18:49:16.537797 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.537931 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:49:16.538782 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:49:16.538917 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:49:16.543640 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:49:16.543763 systemd[1]: Finished modprobe@drm.service. Mar 17 18:49:16.548291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:49:16.548407 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:49:16.554211 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:49:16.554326 systemd[1]: Finished modprobe@loop.service. Mar 17 18:49:16.559168 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 17 18:49:16.559249 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:49:16.560348 systemd[1]: Finished ensure-sysext.service. Mar 17 18:49:16.675699 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:49:16.751816 systemd-networkd[1243]: eth0: Gained IPv6LL Mar 17 18:49:16.756623 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:49:16.790181 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:49:20.194426 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:49:20.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:20.203352 systemd[1]: Starting audit-rules.service... Mar 17 18:49:20.205648 kernel: kauditd_printk_skb: 44 callbacks suppressed Mar 17 18:49:20.205714 kernel: audit: type=1130 audit(1742237360.200:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:20.229262 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:49:20.236785 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:49:20.242000 audit: BPF prog-id=33 op=LOAD Mar 17 18:49:20.245140 systemd[1]: Starting systemd-resolved.service... Mar 17 18:49:20.253318 kernel: audit: type=1334 audit(1742237360.242:203): prog-id=33 op=LOAD Mar 17 18:49:20.253000 audit: BPF prog-id=34 op=LOAD Mar 17 18:49:20.255453 systemd[1]: Starting systemd-timesyncd.service... 
Mar 17 18:49:20.263947 kernel: audit: type=1334 audit(1742237360.253:204): prog-id=34 op=LOAD Mar 17 18:49:20.265602 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:49:20.584000 audit[1421]: SYSTEM_BOOT pid=1421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:49:20.588140 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:49:20.610654 kernel: audit: type=1127 audit(1742237360.584:205): pid=1421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:49:20.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:20.630644 kernel: audit: type=1130 audit(1742237360.610:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:20.692788 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:49:20.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:49:20.697733 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Mar 17 18:49:20.715649 kernel: audit: type=1130 audit(1742237360.696:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:20.741986 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:49:20.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:20.747009 systemd[1]: Reached target time-set.target.
Mar 17 18:49:20.765646 kernel: audit: type=1130 audit(1742237360.746:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:20.868967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:49:20.869514 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:49:20.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:20.893722 kernel: audit: type=1130 audit(1742237360.873:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:20.928862 systemd-resolved[1419]: Positive Trust Anchors:
Mar 17 18:49:20.928873 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:49:20.928899 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:49:21.133103 systemd-resolved[1419]: Using system hostname 'ci-3510.3.7-a-2552a29e1b'.
Mar 17 18:49:21.134861 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:49:21.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:21.140273 systemd[1]: Started systemd-resolved.service.
Mar 17 18:49:21.163844 kernel: audit: type=1130 audit(1742237361.139:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:21.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:21.164413 systemd[1]: Reached target network.target.
Mar 17 18:49:21.186165 kernel: audit: type=1130 audit(1742237361.163:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:49:21.186753 systemd[1]: Reached target network-online.target.
Mar 17 18:49:21.191447 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:49:21.553139 augenrules[1436]: No rules
Mar 17 18:49:21.552000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:49:21.552000 audit[1436]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe04f00e0 a2=420 a3=0 items=0 ppid=1415 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:49:21.552000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:49:21.554163 systemd[1]: Finished audit-rules.service.
Mar 17 18:49:21.663783 systemd-timesyncd[1420]: Contacted time server 75.67.85.23:123 (0.flatcar.pool.ntp.org).
Mar 17 18:49:21.664183 systemd-timesyncd[1420]: Initial clock synchronization to Mon 2025-03-17 18:49:21.659323 UTC.
Mar 17 18:49:28.357569 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:49:28.376219 systemd[1]: Finished ldconfig.service.
Mar 17 18:49:28.382046 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:49:28.404215 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:49:28.409005 systemd[1]: Reached target sysinit.target.
Mar 17 18:49:28.413489 systemd[1]: Started motdgen.path.
Mar 17 18:49:28.417305 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:49:28.423375 systemd[1]: Started logrotate.timer.
Mar 17 18:49:28.427571 systemd[1]: Started mdadm.timer.
Mar 17 18:49:28.431063 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:49:28.435577 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:49:28.435617 systemd[1]: Reached target paths.target.
Mar 17 18:49:28.439519 systemd[1]: Reached target timers.target.
Mar 17 18:49:28.444431 systemd[1]: Listening on dbus.socket.
Mar 17 18:49:28.449354 systemd[1]: Starting docker.socket...
Mar 17 18:49:28.455514 systemd[1]: Listening on sshd.socket.
Mar 17 18:49:28.459950 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:49:28.460475 systemd[1]: Listening on docker.socket.
Mar 17 18:49:28.465009 systemd[1]: Reached target sockets.target.
Mar 17 18:49:28.469189 systemd[1]: Reached target basic.target.
Mar 17 18:49:28.473304 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:49:28.473336 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:49:28.474468 systemd[1]: Starting containerd.service...
Mar 17 18:49:28.479322 systemd[1]: Starting dbus.service...
Mar 17 18:49:28.483591 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:49:28.488966 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:49:28.493173 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:49:28.494504 systemd[1]: Starting kubelet.service...
Mar 17 18:49:28.499171 systemd[1]: Starting motdgen.service...
Mar 17 18:49:28.503618 systemd[1]: Started nvidia.service.
Mar 17 18:49:28.509311 systemd[1]: Starting prepare-helm.service...
Mar 17 18:49:28.514128 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:49:28.519352 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:49:28.525313 systemd[1]: Starting systemd-logind.service...
Mar 17 18:49:28.529388 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:49:28.529458 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:49:28.529942 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:49:28.531231 systemd[1]: Starting update-engine.service...
Mar 17 18:49:28.538614 jq[1446]: false
Mar 17 18:49:28.539184 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:49:28.547974 jq[1464]: true
Mar 17 18:49:28.550171 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:49:28.550340 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:49:28.630789 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:49:28.630954 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:49:28.684042 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:49:28.684219 systemd[1]: Finished motdgen.service.
Mar 17 18:49:28.838988 extend-filesystems[1447]: Found loop1
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda1
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda2
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda3
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found usr
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda4
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda6
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda7
Mar 17 18:49:28.844518 extend-filesystems[1447]: Found sda9
Mar 17 18:49:28.844518 extend-filesystems[1447]: Checking size of /dev/sda9
Mar 17 18:49:28.884731 jq[1469]: true
Mar 17 18:49:28.888451 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 17 18:49:28.893910 systemd-logind[1461]: New seat seat0.
Mar 17 18:49:28.947770 env[1471]: time="2025-03-17T18:49:28.947720820Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:49:28.978439 env[1471]: time="2025-03-17T18:49:28.978391036Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:49:28.979390 env[1471]: time="2025-03-17T18:49:28.979363299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:49:28.982181 env[1471]: time="2025-03-17T18:49:28.982140993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:49:28.982299 env[1471]: time="2025-03-17T18:49:28.982283927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:49:28.982607 env[1471]: time="2025-03-17T18:49:28.982581353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:49:28.983459 env[1471]: time="2025-03-17T18:49:28.983437397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:49:28.983553 env[1471]: time="2025-03-17T18:49:28.983537179Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:49:28.983615 env[1471]: time="2025-03-17T18:49:28.983601608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:49:28.983808 env[1471]: time="2025-03-17T18:49:28.983780055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:49:28.984040 env[1471]: time="2025-03-17T18:49:28.984013852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:49:28.984216 env[1471]: time="2025-03-17T18:49:28.984189141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:49:28.984216 env[1471]: time="2025-03-17T18:49:28.984212416Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:49:28.984286 env[1471]: time="2025-03-17T18:49:28.984265727Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:49:28.984286 env[1471]: time="2025-03-17T18:49:28.984283123Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:49:29.186352 tar[1467]: linux-arm64/LICENSE
Mar 17 18:49:29.186352 tar[1467]: linux-arm64/helm
Mar 17 18:49:29.351391 extend-filesystems[1447]: Old size kept for /dev/sda9
Mar 17 18:49:29.364064 extend-filesystems[1447]: Found sr0
Mar 17 18:49:29.355968 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:49:29.356128 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469008590Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469062221Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469079658Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469119931Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469136169Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469150646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469163644Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469510745Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469528782Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469542219Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469556097Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469569855Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469750984Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:49:29.471318 env[1471]: time="2025-03-17T18:49:29.469825331Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:49:29.471704 systemd[1]: Started containerd.service.
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470046773Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470071409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470083927Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470129119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470142477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470155955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470167273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470179071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470191708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470202667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470213825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470227942Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470345002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470361439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476107 env[1471]: time="2025-03-17T18:49:29.470375317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476395 env[1471]: time="2025-03-17T18:49:29.470387195Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:49:29.476395 env[1471]: time="2025-03-17T18:49:29.470401713Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:49:29.476395 env[1471]: time="2025-03-17T18:49:29.470413031Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:49:29.476395 env[1471]: time="2025-03-17T18:49:29.470430788Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:49:29.476395 env[1471]: time="2025-03-17T18:49:29.470465662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.470689064Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.470746094Z" level=info msg="Connect containerd service"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.470782088Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.471312037Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.471536359Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.471571713Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.471620705Z" level=info msg="containerd successfully booted in 0.524788s"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.472682283Z" level=info msg="Start subscribing containerd event"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.472725556Z" level=info msg="Start recovering state"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.472815941Z" level=info msg="Start event monitor"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.472847535Z" level=info msg="Start snapshots syncer"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.472863252Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:49:29.476497 env[1471]: time="2025-03-17T18:49:29.472870531Z" level=info msg="Start streaming server"
Mar 17 18:49:29.485891 systemd[1]: Started kubelet.service.
Mar 17 18:49:29.492092 dbus-daemon[1445]: [system] SELinux support is enabled
Mar 17 18:49:29.492239 systemd[1]: Started dbus.service.
Mar 17 18:49:29.512584 dbus-daemon[1445]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 18:49:29.497583 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:49:29.497604 systemd[1]: Reached target system-config.target.
Mar 17 18:49:29.505739 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:49:29.505758 systemd[1]: Reached target user-config.target.
Mar 17 18:49:29.512756 systemd[1]: Started systemd-logind.service.
Mar 17 18:49:29.552371 systemd[1]: nvidia.service: Deactivated successfully.
Mar 17 18:49:29.629651 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:49:29.630914 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:49:30.092801 kubelet[1548]: E0317 18:49:30.092726 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:49:30.094535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:49:30.094668 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:49:30.161590 tar[1467]: linux-arm64/README.md
Mar 17 18:49:30.166240 systemd[1]: Finished prepare-helm.service.
Mar 17 18:49:30.436671 update_engine[1463]: I0317 18:49:30.423862 1463 main.cc:92] Flatcar Update Engine starting
Mar 17 18:49:30.482753 systemd[1]: Started update-engine.service.
Mar 17 18:49:30.489011 systemd[1]: Started locksmithd.service.
Mar 17 18:49:30.493625 update_engine[1463]: I0317 18:49:30.493593 1463 update_check_scheduler.cc:74] Next update check in 10m22s
Mar 17 18:49:31.181593 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:49:31.199596 systemd[1]: Finished sshd-keygen.service.
Mar 17 18:49:31.205377 systemd[1]: Starting issuegen.service...
Mar 17 18:49:31.210038 systemd[1]: Started waagent.service.
Mar 17 18:49:31.214610 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:49:31.214792 systemd[1]: Finished issuegen.service.
Mar 17 18:49:31.220246 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 18:49:31.251841 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 18:49:31.258129 systemd[1]: Started getty@tty1.service.
Mar 17 18:49:31.263712 systemd[1]: Started serial-getty@ttyAMA0.service.
Mar 17 18:49:31.268430 systemd[1]: Reached target getty.target.
Mar 17 18:49:31.272425 systemd[1]: Reached target multi-user.target.
Mar 17 18:49:31.278407 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 18:49:31.291529 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 18:49:31.291713 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 18:49:31.297142 systemd[1]: Startup finished in 747ms (kernel) + 15.201s (initrd) + 35.491s (userspace) = 51.440s.
Mar 17 18:49:32.082977 locksmithd[1558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:49:32.201778 login[1577]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Mar 17 18:49:32.202683 login[1576]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 17 18:49:32.297220 systemd[1]: Created slice user-500.slice.
Mar 17 18:49:32.298313 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 18:49:32.300543 systemd-logind[1461]: New session 1 of user core.
Mar 17 18:49:32.340000 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 18:49:32.341466 systemd[1]: Starting user@500.service...
Mar 17 18:49:32.357823 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:49:32.533129 systemd[1580]: Queued start job for default target default.target.
Mar 17 18:49:32.533620 systemd[1580]: Reached target paths.target.
Mar 17 18:49:32.533654 systemd[1580]: Reached target sockets.target.
Mar 17 18:49:32.533666 systemd[1580]: Reached target timers.target.
Mar 17 18:49:32.533675 systemd[1580]: Reached target basic.target.
Mar 17 18:49:32.533773 systemd[1]: Started user@500.service.
Mar 17 18:49:32.534580 systemd[1]: Started session-1.scope.
Mar 17 18:49:32.534961 systemd[1580]: Reached target default.target.
Mar 17 18:49:32.535110 systemd[1580]: Startup finished in 171ms.
Mar 17 18:49:33.203293 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 17 18:49:33.207336 systemd-logind[1461]: New session 2 of user core.
Mar 17 18:49:33.207739 systemd[1]: Started session-2.scope.
Mar 17 18:49:40.317242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:49:40.317406 systemd[1]: Stopped kubelet.service.
Mar 17 18:49:40.318751 systemd[1]: Starting kubelet.service...
Mar 17 18:49:42.394885 systemd[1]: Started kubelet.service.
Mar 17 18:49:42.450560 kubelet[1605]: E0317 18:49:42.450520 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:49:42.453313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:49:42.453434 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:49:44.072874 waagent[1572]: 2025-03-17T18:49:44.072762Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Mar 17 18:49:44.106507 waagent[1572]: 2025-03-17T18:49:44.106413Z INFO Daemon Daemon OS: flatcar 3510.3.7
Mar 17 18:49:44.111170 waagent[1572]: 2025-03-17T18:49:44.111086Z INFO Daemon Daemon Python: 3.9.16
Mar 17 18:49:44.116839 waagent[1572]: 2025-03-17T18:49:44.116735Z INFO Daemon Daemon Run daemon
Mar 17 18:49:44.121097 waagent[1572]: 2025-03-17T18:49:44.121010Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7'
Mar 17 18:49:44.138155 waagent[1572]: 2025-03-17T18:49:44.138009Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Mar 17 18:49:44.152833 waagent[1572]: 2025-03-17T18:49:44.152696Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 17 18:49:44.162277 waagent[1572]: 2025-03-17T18:49:44.162194Z INFO Daemon Daemon cloud-init is enabled: False
Mar 17 18:49:44.167295 waagent[1572]: 2025-03-17T18:49:44.167221Z INFO Daemon Daemon Using waagent for provisioning
Mar 17 18:49:44.173143 waagent[1572]: 2025-03-17T18:49:44.173072Z INFO Daemon Daemon Activate resource disk
Mar 17 18:49:44.177889 waagent[1572]: 2025-03-17T18:49:44.177812Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Mar 17 18:49:44.192341 waagent[1572]: 2025-03-17T18:49:44.192255Z INFO Daemon Daemon Found device: None
Mar 17 18:49:44.196915 waagent[1572]: 2025-03-17T18:49:44.196840Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Mar 17 18:49:44.205289 waagent[1572]: 2025-03-17T18:49:44.205212Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Mar 17 18:49:44.217144 waagent[1572]: 2025-03-17T18:49:44.217075Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 17 18:49:44.222926 waagent[1572]: 2025-03-17T18:49:44.222846Z INFO Daemon Daemon Running default provisioning handler
Mar 17 18:49:44.236867 waagent[1572]: 2025-03-17T18:49:44.236707Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Mar 17 18:49:44.251731 waagent[1572]: 2025-03-17T18:49:44.251568Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 17 18:49:44.261590 waagent[1572]: 2025-03-17T18:49:44.261505Z INFO Daemon Daemon cloud-init is enabled: False
Mar 17 18:49:44.266662 waagent[1572]: 2025-03-17T18:49:44.266568Z INFO Daemon Daemon Copying ovf-env.xml
Mar 17 18:49:44.364040 waagent[1572]: 2025-03-17T18:49:44.362838Z INFO Daemon Daemon Successfully mounted dvd
Mar 17 18:49:44.466496 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Mar 17 18:49:44.514069 waagent[1572]: 2025-03-17T18:49:44.513916Z INFO Daemon Daemon Detect protocol endpoint
Mar 17 18:49:44.519245 waagent[1572]: 2025-03-17T18:49:44.519144Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 17 18:49:44.525307 waagent[1572]: 2025-03-17T18:49:44.525212Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Mar 17 18:49:44.532239 waagent[1572]: 2025-03-17T18:49:44.532145Z INFO Daemon Daemon Test for route to 168.63.129.16
Mar 17 18:49:44.537867 waagent[1572]: 2025-03-17T18:49:44.537779Z INFO Daemon Daemon Route to 168.63.129.16 exists
Mar 17 18:49:44.543267 waagent[1572]: 2025-03-17T18:49:44.543177Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Mar 17 18:49:44.682277 waagent[1572]: 2025-03-17T18:49:44.682142Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Mar 17 18:49:44.689805 waagent[1572]: 2025-03-17T18:49:44.689755Z INFO Daemon Daemon Wire protocol version:2012-11-30
Mar 17 18:49:44.695273 waagent[1572]: 2025-03-17T18:49:44.695190Z INFO Daemon Daemon Server preferred version:2015-04-05
Mar 17 18:49:45.176548 waagent[1572]: 2025-03-17T18:49:45.176368Z INFO Daemon Daemon Initializing goal state during protocol detection
Mar 17 18:49:45.192996 waagent[1572]: 2025-03-17T18:49:45.192892Z INFO Daemon Daemon Forcing an update of the goal state..
Mar 17 18:49:45.198847 waagent[1572]: 2025-03-17T18:49:45.198759Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Mar 17 18:49:45.318417 waagent[1572]: 2025-03-17T18:49:45.318258Z INFO Daemon Daemon Found private key matching thumbprint C6E7F39F62FCBD1B584508ADE5C9E58A299CCD58
Mar 17 18:49:45.326995 waagent[1572]: 2025-03-17T18:49:45.326905Z INFO Daemon Daemon Certificate with thumbprint 3F0198BBBD713DE4488AFF9082D4243CE0A01D14 has no matching private key.
Mar 17 18:49:45.336522 waagent[1572]: 2025-03-17T18:49:45.336439Z INFO Daemon Daemon Fetch goal state completed
Mar 17 18:49:45.370923 waagent[1572]: 2025-03-17T18:49:45.370864Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: baecad7a-642f-4ec3-8384-3b6c631c3141 New eTag: 9578591451016606750]
Mar 17 18:49:45.381762 waagent[1572]: 2025-03-17T18:49:45.381673Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Mar 17 18:49:45.398158 waagent[1572]: 2025-03-17T18:49:45.398091Z INFO Daemon Daemon Starting provisioning
Mar 17 18:49:45.403193 waagent[1572]: 2025-03-17T18:49:45.403104Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 17 18:49:45.408034 waagent[1572]: 2025-03-17T18:49:45.407960Z INFO Daemon Daemon Set hostname [ci-3510.3.7-a-2552a29e1b]
Mar 17 18:49:45.657464 waagent[1572]: 2025-03-17T18:49:45.657306Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-a-2552a29e1b]
Mar 17 18:49:45.664175 waagent[1572]: 2025-03-17T18:49:45.664080Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 17 18:49:45.670805 waagent[1572]: 2025-03-17T18:49:45.670723Z INFO Daemon Daemon Primary interface is [eth0]
Mar 17 18:49:45.687947 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Mar 17 18:49:45.688130 systemd[1]: Stopped systemd-networkd-wait-online.service.
Mar 17 18:49:45.688189 systemd[1]: Stopping systemd-networkd-wait-online.service...
Mar 17 18:49:45.688437 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:49:45.693685 systemd-networkd[1243]: eth0: DHCPv6 lease lost
Mar 17 18:49:45.695351 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:49:45.695528 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:49:45.697796 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:49:45.726480 systemd-networkd[1634]: enP43927s1: Link UP
Mar 17 18:49:45.726495 systemd-networkd[1634]: enP43927s1: Gained carrier
Mar 17 18:49:45.727503 systemd-networkd[1634]: eth0: Link UP
Mar 17 18:49:45.727515 systemd-networkd[1634]: eth0: Gained carrier
Mar 17 18:49:45.728019 systemd-networkd[1634]: lo: Link UP
Mar 17 18:49:45.728029 systemd-networkd[1634]: lo: Gained carrier
Mar 17 18:49:45.728272 systemd-networkd[1634]: eth0: Gained IPv6LL
Mar 17 18:49:45.729392 systemd-networkd[1634]: Enumeration completed
Mar 17 18:49:45.729551 systemd[1]: Started systemd-networkd.service.
Mar 17 18:49:45.731123 systemd-networkd[1634]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:49:45.731335 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:49:45.739042 waagent[1572]: 2025-03-17T18:49:45.738871Z INFO Daemon Daemon Create user account if not exists
Mar 17 18:49:45.745302 waagent[1572]: 2025-03-17T18:49:45.745208Z INFO Daemon Daemon User core already exists, skip useradd
Mar 17 18:49:45.751139 waagent[1572]: 2025-03-17T18:49:45.751046Z INFO Daemon Daemon Configure sudoer
Mar 17 18:49:45.756111 waagent[1572]: 2025-03-17T18:49:45.756025Z INFO Daemon Daemon Configure sshd
Mar 17 18:49:45.760495 waagent[1572]: 2025-03-17T18:49:45.760406Z INFO Daemon Daemon Deploy ssh public key.
Mar 17 18:49:45.760746 systemd-networkd[1634]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 18:49:45.766208 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:49:46.942156 waagent[1572]: 2025-03-17T18:49:46.942053Z INFO Daemon Daemon Provisioning complete
Mar 17 18:49:46.963308 waagent[1572]: 2025-03-17T18:49:46.963235Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 17 18:49:46.969752 waagent[1572]: 2025-03-17T18:49:46.969662Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 17 18:49:46.980854 waagent[1572]: 2025-03-17T18:49:46.980771Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Mar 17 18:49:47.302986 waagent[1643]: 2025-03-17T18:49:47.302825Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Mar 17 18:49:47.304164 waagent[1643]: 2025-03-17T18:49:47.304090Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:47.304441 waagent[1643]: 2025-03-17T18:49:47.304391Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:47.318674 waagent[1643]: 2025-03-17T18:49:47.318558Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Mar 17 18:49:47.319064 waagent[1643]: 2025-03-17T18:49:47.319011Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Mar 17 18:49:47.394230 waagent[1643]: 2025-03-17T18:49:47.394083Z INFO ExtHandler ExtHandler Found private key matching thumbprint C6E7F39F62FCBD1B584508ADE5C9E58A299CCD58
Mar 17 18:49:47.394663 waagent[1643]: 2025-03-17T18:49:47.394586Z INFO ExtHandler ExtHandler Certificate with thumbprint 3F0198BBBD713DE4488AFF9082D4243CE0A01D14 has no matching private key.
Mar 17 18:49:47.395005 waagent[1643]: 2025-03-17T18:49:47.394954Z INFO ExtHandler ExtHandler Fetch goal state completed
Mar 17 18:49:47.416508 waagent[1643]: 2025-03-17T18:49:47.416447Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 667b7c8d-821e-4b62-aa48-4ace1f8d64d8 New eTag: 9578591451016606750]
Mar 17 18:49:47.417364 waagent[1643]: 2025-03-17T18:49:47.417291Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Mar 17 18:49:47.503574 waagent[1643]: 2025-03-17T18:49:47.503424Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 17 18:49:47.528854 waagent[1643]: 2025-03-17T18:49:47.528756Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1643
Mar 17 18:49:47.533154 waagent[1643]: 2025-03-17T18:49:47.533056Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
Mar 17 18:49:47.534746 waagent[1643]: 2025-03-17T18:49:47.534658Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 17 18:49:47.648524 waagent[1643]: 2025-03-17T18:49:47.648462Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 17 18:49:47.649214 waagent[1643]: 2025-03-17T18:49:47.649150Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 17 18:49:47.657566 waagent[1643]: 2025-03-17T18:49:47.657506Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 17 18:49:47.658344 waagent[1643]: 2025-03-17T18:49:47.658276Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Mar 17 18:49:47.659754 waagent[1643]: 2025-03-17T18:49:47.659688Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Mar 17 18:49:47.661367 waagent[1643]: 2025-03-17T18:49:47.661290Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 17 18:49:47.661938 waagent[1643]: 2025-03-17T18:49:47.661864Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:47.662128 waagent[1643]: 2025-03-17T18:49:47.662070Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:47.662768 waagent[1643]: 2025-03-17T18:49:47.662698Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 17 18:49:47.663575 waagent[1643]: 2025-03-17T18:49:47.663497Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 17 18:49:47.663858 waagent[1643]: 2025-03-17T18:49:47.663782Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 17 18:49:47.663858 waagent[1643]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 17 18:49:47.663858 waagent[1643]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Mar 17 18:49:47.663858 waagent[1643]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 17 18:49:47.663858 waagent[1643]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:47.663858 waagent[1643]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:47.663858 waagent[1643]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:47.664147 waagent[1643]: 2025-03-17T18:49:47.664077Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:47.666926 waagent[1643]: 2025-03-17T18:49:47.666730Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 17 18:49:47.667516 waagent[1643]: 2025-03-17T18:49:47.667445Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:47.668738 waagent[1643]: 2025-03-17T18:49:47.668625Z INFO EnvHandler ExtHandler Configure routes
Mar 17 18:49:47.668843 waagent[1643]: 2025-03-17T18:49:47.668773Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 17 18:49:47.669425 waagent[1643]: 2025-03-17T18:49:47.669353Z INFO EnvHandler ExtHandler Gateway:None
Mar 17 18:49:47.669689 waagent[1643]: 2025-03-17T18:49:47.669602Z INFO EnvHandler ExtHandler Routes:None
Mar 17 18:49:47.670859 waagent[1643]: 2025-03-17T18:49:47.670793Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 17 18:49:47.670952 waagent[1643]: 2025-03-17T18:49:47.670885Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 17 18:49:47.671457 waagent[1643]: 2025-03-17T18:49:47.671388Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 17 18:49:47.683944 waagent[1643]: 2025-03-17T18:49:47.683865Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Mar 17 18:49:47.684660 waagent[1643]: 2025-03-17T18:49:47.684580Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Mar 17 18:49:47.685863 waagent[1643]: 2025-03-17T18:49:47.685799Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Mar 17 18:49:47.735021 waagent[1643]: 2025-03-17T18:49:47.734877Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1634'
Mar 17 18:49:47.758777 waagent[1643]: 2025-03-17T18:49:47.758698Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Mar 17 18:49:47.842773 waagent[1643]: 2025-03-17T18:49:47.842611Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 17 18:49:47.842773 waagent[1643]: Executing ['ip', '-a', '-o', 'link']:
Mar 17 18:49:47.842773 waagent[1643]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 17 18:49:47.842773 waagent[1643]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:60:e7 brd ff:ff:ff:ff:ff:ff
Mar 17 18:49:47.842773 waagent[1643]: 3: enP43927s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:60:e7 brd ff:ff:ff:ff:ff:ff\ altname enP43927p0s2
Mar 17 18:49:47.842773 waagent[1643]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 17 18:49:47.842773 waagent[1643]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 17 18:49:47.842773 waagent[1643]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 17 18:49:47.842773 waagent[1643]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 17 18:49:47.842773 waagent[1643]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Mar 17 18:49:47.842773 waagent[1643]: 2: eth0 inet6 fe80::222:48ff:febc:60e7/64 scope link \ valid_lft forever preferred_lft forever
Mar 17 18:49:48.090876 waagent[1643]: 2025-03-17T18:49:48.090799Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Mar 17 18:49:48.985043 waagent[1572]: 2025-03-17T18:49:48.984917Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Mar 17 18:49:48.989794 waagent[1572]: 2025-03-17T18:49:48.989735Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Mar 17 18:49:50.303526 waagent[1681]: 2025-03-17T18:49:50.303429Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Mar 17 18:49:50.306215 waagent[1681]: 2025-03-17T18:49:50.306129Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7
Mar 17 18:49:50.306496 waagent[1681]: 2025-03-17T18:49:50.306448Z INFO ExtHandler ExtHandler Python: 3.9.16
Mar 17 18:49:50.306732 waagent[1681]: 2025-03-17T18:49:50.306684Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Mar 17 18:49:50.315601 waagent[1681]: 2025-03-17T18:49:50.315461Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 17 18:49:50.316282 waagent[1681]: 2025-03-17T18:49:50.316219Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:50.316533 waagent[1681]: 2025-03-17T18:49:50.316486Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:50.331114 waagent[1681]: 2025-03-17T18:49:50.331010Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 17 18:49:50.349512 waagent[1681]: 2025-03-17T18:49:50.349445Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
Mar 17 18:49:50.350953 waagent[1681]: 2025-03-17T18:49:50.350885Z INFO ExtHandler
Mar 17 18:49:50.351275 waagent[1681]: 2025-03-17T18:49:50.351225Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8ff4ec82-2a5f-4651-83f8-fcb184e52cdc eTag: 9578591451016606750 source: Fabric]
Mar 17 18:49:50.352202 waagent[1681]: 2025-03-17T18:49:50.352144Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 17 18:49:50.353673 waagent[1681]: 2025-03-17T18:49:50.353583Z INFO ExtHandler
Mar 17 18:49:50.353927 waagent[1681]: 2025-03-17T18:49:50.353879Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 17 18:49:50.362341 waagent[1681]: 2025-03-17T18:49:50.362283Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 17 18:49:50.363101 waagent[1681]: 2025-03-17T18:49:50.363051Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Mar 17 18:49:50.394478 waagent[1681]: 2025-03-17T18:49:50.394411Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Mar 17 18:49:50.476070 waagent[1681]: 2025-03-17T18:49:50.475921Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C6E7F39F62FCBD1B584508ADE5C9E58A299CCD58', 'hasPrivateKey': True}
Mar 17 18:49:50.477476 waagent[1681]: 2025-03-17T18:49:50.477407Z INFO ExtHandler Downloaded certificate {'thumbprint': '3F0198BBBD713DE4488AFF9082D4243CE0A01D14', 'hasPrivateKey': False}
Mar 17 18:49:50.478798 waagent[1681]: 2025-03-17T18:49:50.478731Z INFO ExtHandler Fetch goal state completed
Mar 17 18:49:50.501526 waagent[1681]: 2025-03-17T18:49:50.501390Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Mar 17 18:49:50.515231 waagent[1681]: 2025-03-17T18:49:50.515122Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1681
Mar 17 18:49:50.518940 waagent[1681]: 2025-03-17T18:49:50.518843Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
Mar 17 18:49:50.520362 waagent[1681]: 2025-03-17T18:49:50.520294Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Mar 17 18:49:50.520870 waagent[1681]: 2025-03-17T18:49:50.520811Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Mar 17 18:49:50.523195 waagent[1681]: 2025-03-17T18:49:50.523122Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 17 18:49:50.529061 waagent[1681]: 2025-03-17T18:49:50.529000Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 17 18:49:50.529731 waagent[1681]: 2025-03-17T18:49:50.529663Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 17 18:49:50.538139 waagent[1681]: 2025-03-17T18:49:50.538078Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 17 18:49:50.538941 waagent[1681]: 2025-03-17T18:49:50.538873Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Mar 17 18:49:50.546358 waagent[1681]: 2025-03-17T18:49:50.546205Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 17 18:49:50.547845 waagent[1681]: 2025-03-17T18:49:50.547755Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Mar 17 18:49:50.549773 waagent[1681]: 2025-03-17T18:49:50.549688Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 17 18:49:50.550052 waagent[1681]: 2025-03-17T18:49:50.549979Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:50.550924 waagent[1681]: 2025-03-17T18:49:50.550849Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:50.551609 waagent[1681]: 2025-03-17T18:49:50.551537Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 17 18:49:50.551967 waagent[1681]: 2025-03-17T18:49:50.551904Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 17 18:49:50.551967 waagent[1681]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 17 18:49:50.551967 waagent[1681]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Mar 17 18:49:50.551967 waagent[1681]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 17 18:49:50.551967 waagent[1681]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:50.551967 waagent[1681]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:50.551967 waagent[1681]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:49:50.554695 waagent[1681]: 2025-03-17T18:49:50.554486Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 17 18:49:50.555171 waagent[1681]: 2025-03-17T18:49:50.555091Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:49:50.555829 waagent[1681]: 2025-03-17T18:49:50.555756Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:49:50.559121 waagent[1681]: 2025-03-17T18:49:50.558877Z INFO EnvHandler ExtHandler Configure routes
Mar 17 18:49:50.559544 waagent[1681]: 2025-03-17T18:49:50.559452Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 17 18:49:50.560018 waagent[1681]: 2025-03-17T18:49:50.559939Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 17 18:49:50.561232 waagent[1681]: 2025-03-17T18:49:50.561034Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 17 18:49:50.561439 waagent[1681]: 2025-03-17T18:49:50.561376Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 17 18:49:50.567904 waagent[1681]: 2025-03-17T18:49:50.567801Z INFO EnvHandler ExtHandler Gateway:None
Mar 17 18:49:50.568238 waagent[1681]: 2025-03-17T18:49:50.568161Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 17 18:49:50.568238 waagent[1681]: Executing ['ip', '-a', '-o', 'link']:
Mar 17 18:49:50.568238 waagent[1681]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 17 18:49:50.568238 waagent[1681]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:60:e7 brd ff:ff:ff:ff:ff:ff
Mar 17 18:49:50.568238 waagent[1681]: 3: enP43927s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:60:e7 brd ff:ff:ff:ff:ff:ff\ altname enP43927p0s2
Mar 17 18:49:50.568238 waagent[1681]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 17 18:49:50.568238 waagent[1681]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 17 18:49:50.568238 waagent[1681]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 17 18:49:50.568238 waagent[1681]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 17 18:49:50.568238 waagent[1681]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Mar 17 18:49:50.568238 waagent[1681]: 2: eth0 inet6 fe80::222:48ff:febc:60e7/64 scope link \ valid_lft forever preferred_lft forever
Mar 17 18:49:50.571263 waagent[1681]: 2025-03-17T18:49:50.571060Z INFO EnvHandler ExtHandler Routes:None
Mar 17 18:49:50.571724 waagent[1681]: 2025-03-17T18:49:50.571619Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 17 18:49:50.583061 waagent[1681]: 2025-03-17T18:49:50.582974Z INFO ExtHandler ExtHandler Downloading agent manifest
Mar 17 18:49:50.601987 waagent[1681]: 2025-03-17T18:49:50.601906Z INFO ExtHandler ExtHandler
Mar 17 18:49:50.602143 waagent[1681]: 2025-03-17T18:49:50.602088Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 66784ca9-ce61-4b17-938d-3687cd45f5c5 correlation 43a6769f-b4dc-45b4-b9b5-b52384738a5f created: 2025-03-17T18:47:55.643547Z]
Mar 17 18:49:50.603113 waagent[1681]: 2025-03-17T18:49:50.603040Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 17 18:49:50.605272 waagent[1681]: 2025-03-17T18:49:50.605195Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Mar 17 18:49:50.639289 waagent[1681]: 2025-03-17T18:49:50.639207Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Mar 17 18:49:50.664524 waagent[1681]: 2025-03-17T18:49:50.664433Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E5DE90CD-AEFD-4B45-9912-8845F9791C6C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Mar 17 18:49:50.762473 waagent[1681]: 2025-03-17T18:49:50.762295Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Mar 17 18:49:50.762473 waagent[1681]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:50.762473 waagent[1681]: pkts bytes target prot opt in out source destination
Mar 17 18:49:50.762473 waagent[1681]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:50.762473 waagent[1681]: pkts bytes target prot opt in out source destination
Mar 17 18:49:50.762473 waagent[1681]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes)
Mar 17 18:49:50.762473 waagent[1681]: pkts bytes target prot opt in out source destination
Mar 17 18:49:50.762473 waagent[1681]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 17 18:49:50.762473 waagent[1681]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 17 18:49:50.762473 waagent[1681]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 17 18:49:50.771189 waagent[1681]: 2025-03-17T18:49:50.771036Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 17 18:49:50.771189 waagent[1681]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:50.771189 waagent[1681]: pkts bytes target prot opt in out source destination
Mar 17 18:49:50.771189 waagent[1681]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 17 18:49:50.771189 waagent[1681]: pkts bytes target prot opt in out source destination
Mar 17 18:49:50.771189 waagent[1681]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes)
Mar 17 18:49:50.771189 waagent[1681]: pkts bytes target prot opt in out source destination
Mar 17 18:49:50.771189 waagent[1681]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 17 18:49:50.771189 waagent[1681]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 17 18:49:50.771189 waagent[1681]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 17 18:49:50.771802 waagent[1681]: 2025-03-17T18:49:50.771747Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 17 18:49:52.567226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:49:52.567392 systemd[1]: Stopped kubelet.service.
Mar 17 18:49:52.568833 systemd[1]: Starting kubelet.service...
Mar 17 18:49:52.692994 systemd[1]: Started kubelet.service.
Mar 17 18:49:52.731143 kubelet[1733]: E0317 18:49:52.731087 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:49:52.733360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:49:52.733481 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:49:56.921011 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 17 18:50:02.817255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 18:50:02.817435 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:02.818874 systemd[1]: Starting kubelet.service...
Mar 17 18:50:02.989156 systemd[1]: Started kubelet.service.
Mar 17 18:50:03.025239 kubelet[1744]: E0317 18:50:03.025160 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:03.027460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:03.027582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:13.067290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 18:50:13.067457 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:13.068889 systemd[1]: Starting kubelet.service...
Mar 17 18:50:13.483952 systemd[1]: Started kubelet.service.
Mar 17 18:50:13.522360 kubelet[1756]: E0317 18:50:13.522303 1756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:13.524550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:13.524695 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:15.879753 update_engine[1463]: I0317 18:50:15.879699 1463 update_attempter.cc:509] Updating boot flags...
Mar 17 18:50:23.567232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 18:50:23.567405 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:23.568877 systemd[1]: Starting kubelet.service...
Mar 17 18:50:23.681076 systemd[1]: Started kubelet.service.
Mar 17 18:50:23.732647 kubelet[1831]: E0317 18:50:23.732599 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:23.734761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:23.734886 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:32.707874 systemd[1]: Created slice system-sshd.slice.
Mar 17 18:50:32.709386 systemd[1]: Started sshd@0-10.200.20.41:22-10.200.16.10:54548.service.
Mar 17 18:50:33.309801 sshd[1837]: Accepted publickey for core from 10.200.16.10 port 54548 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:50:33.332011 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:33.336224 systemd[1]: Started session-3.scope.
Mar 17 18:50:33.336693 systemd-logind[1461]: New session 3 of user core.
Mar 17 18:50:33.699982 systemd[1]: Started sshd@1-10.200.20.41:22-10.200.16.10:54558.service.
Mar 17 18:50:33.817224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 17 18:50:33.817398 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:33.818775 systemd[1]: Starting kubelet.service...
Mar 17 18:50:34.147131 sshd[1842]: Accepted publickey for core from 10.200.16.10 port 54558 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:50:34.148520 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:34.153033 systemd[1]: Started session-4.scope.
Mar 17 18:50:34.154195 systemd-logind[1461]: New session 4 of user core.
Mar 17 18:50:34.252839 systemd[1]: Started kubelet.service.
Mar 17 18:50:34.289390 kubelet[1849]: E0317 18:50:34.289350 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:34.291403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:34.291528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:34.560559 systemd[1]: Started sshd@2-10.200.20.41:22-10.200.16.10:54562.service.
Mar 17 18:50:37.313916 sshd[1842]: pam_unix(sshd:session): session closed for user core
Mar 17 18:50:37.316322 systemd[1]: sshd@1-10.200.20.41:22-10.200.16.10:54558.service: Deactivated successfully.
Mar 17 18:50:37.319213 sshd[1856]: Accepted publickey for core from 10.200.16.10 port 54562 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:50:37.317106 sshd[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:37.317052 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:50:37.317592 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:50:37.320468 systemd-logind[1461]: Removed session 4.
Mar 17 18:50:37.323360 systemd-logind[1461]: New session 5 of user core.
Mar 17 18:50:37.323878 systemd[1]: Started session-5.scope.
Mar 17 18:50:37.595926 sshd[1856]: pam_unix(sshd:session): session closed for user core
Mar 17 18:50:37.598600 systemd[1]: sshd@2-10.200.20.41:22-10.200.16.10:54562.service: Deactivated successfully.
Mar 17 18:50:37.599223 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:50:37.599761 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:50:37.600573 systemd-logind[1461]: Removed session 5.
Mar 17 18:50:37.676480 systemd[1]: Started sshd@3-10.200.20.41:22-10.200.16.10:54564.service.
Mar 17 18:50:38.165876 sshd[1863]: Accepted publickey for core from 10.200.16.10 port 54564 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:50:38.167149 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:38.171443 systemd[1]: Started session-6.scope.
Mar 17 18:50:38.172472 systemd-logind[1461]: New session 6 of user core.
Mar 17 18:50:38.532002 sshd[1863]: pam_unix(sshd:session): session closed for user core
Mar 17 18:50:38.534286 systemd[1]: sshd@3-10.200.20.41:22-10.200.16.10:54564.service: Deactivated successfully.
Mar 17 18:50:38.534945 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:50:38.535519 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:50:38.536448 systemd-logind[1461]: Removed session 6.
Mar 17 18:50:38.612414 systemd[1]: Started sshd@4-10.200.20.41:22-10.200.16.10:55474.service.
Mar 17 18:50:39.102844 sshd[1869]: Accepted publickey for core from 10.200.16.10 port 55474 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:50:39.104113 sshd[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:50:39.107899 systemd-logind[1461]: New session 7 of user core.
Mar 17 18:50:39.108286 systemd[1]: Started session-7.scope.
Mar 17 18:50:40.738921 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:50:40.739547 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:50:40.760887 systemd[1]: Starting docker.service...
Mar 17 18:50:40.793230 env[1882]: time="2025-03-17T18:50:40.793181737Z" level=info msg="Starting up"
Mar 17 18:50:40.797799 env[1882]: time="2025-03-17T18:50:40.797770136Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:50:40.797926 env[1882]: time="2025-03-17T18:50:40.797912535Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:50:40.797997 env[1882]: time="2025-03-17T18:50:40.797981614Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:50:40.798048 env[1882]: time="2025-03-17T18:50:40.798036574Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:50:40.799546 env[1882]: time="2025-03-17T18:50:40.799523440Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:50:40.799679 env[1882]: time="2025-03-17T18:50:40.799665559Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:50:40.799766 env[1882]: time="2025-03-17T18:50:40.799751118Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:50:40.799835 env[1882]: time="2025-03-17T18:50:40.799823198Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:50:40.804964 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3457032981-merged.mount: Deactivated successfully.
Mar 17 18:50:40.855578 env[1882]: time="2025-03-17T18:50:40.855543540Z" level=info msg="Loading containers: start."
Mar 17 18:50:41.055661 kernel: Initializing XFRM netlink socket
Mar 17 18:50:41.079371 env[1882]: time="2025-03-17T18:50:41.079339397Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:50:41.247057 systemd-networkd[1634]: docker0: Link UP
Mar 17 18:50:41.274831 env[1882]: time="2025-03-17T18:50:41.274771537Z" level=info msg="Loading containers: done."
Mar 17 18:50:41.283960 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck158424560-merged.mount: Deactivated successfully.
Mar 17 18:50:41.296099 env[1882]: time="2025-03-17T18:50:41.296058832Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:50:41.296259 env[1882]: time="2025-03-17T18:50:41.296238790Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:50:41.296363 env[1882]: time="2025-03-17T18:50:41.296345109Z" level=info msg="Daemon has completed initialization"
Mar 17 18:50:41.329235 systemd[1]: Started docker.service.
Mar 17 18:50:41.335232 env[1882]: time="2025-03-17T18:50:41.335167092Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:50:42.329931 env[1471]: time="2025-03-17T18:50:42.329887992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\""
Mar 17 18:50:43.173974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232678595.mount: Deactivated successfully.
Mar 17 18:50:44.317177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 17 18:50:44.317415 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:44.318781 systemd[1]: Starting kubelet.service...
Mar 17 18:50:44.412317 systemd[1]: Started kubelet.service.
Mar 17 18:50:44.452534 kubelet[2003]: E0317 18:50:44.452477 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:44.454872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:44.454989 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:50:47.414245 env[1471]: time="2025-03-17T18:50:47.414183974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:47.461937 env[1471]: time="2025-03-17T18:50:47.461868299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:47.523941 env[1471]: time="2025-03-17T18:50:47.523882877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:47.569274 env[1471]: time="2025-03-17T18:50:47.569230739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:47.571014 env[1471]: time="2025-03-17T18:50:47.570979846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\""
Mar 17 18:50:47.571637 env[1471]: time="2025-03-17T18:50:47.571603602Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\""
Mar 17 18:50:52.662251 env[1471]: time="2025-03-17T18:50:52.662190297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:52.718183 env[1471]: time="2025-03-17T18:50:52.718145690Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:52.764269 env[1471]: time="2025-03-17T18:50:52.764230667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:52.829394 env[1471]: time="2025-03-17T18:50:52.829351720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:52.831222 env[1471]: time="2025-03-17T18:50:52.831172428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\""
Mar 17 18:50:52.832069 env[1471]: time="2025-03-17T18:50:52.832042702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 17 18:50:54.567183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 17 18:50:54.567349 systemd[1]: Stopped kubelet.service.
Mar 17 18:50:54.568700 systemd[1]: Starting kubelet.service...
Mar 17 18:50:55.974711 systemd[1]: Started kubelet.service.
Mar 17 18:50:56.008887 kubelet[2012]: E0317 18:50:56.008834 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:50:56.011563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:50:56.011716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:51:03.461815 env[1471]: time="2025-03-17T18:51:03.461752877Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:03.524755 env[1471]: time="2025-03-17T18:51:03.524716638Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:03.572211 env[1471]: time="2025-03-17T18:51:03.572170158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:03.617982 env[1471]: time="2025-03-17T18:51:03.617914647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:03.619048 env[1471]: time="2025-03-17T18:51:03.619009361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\""
Mar 17 18:51:03.619669 env[1471]: time="2025-03-17T18:51:03.619644718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 17 18:51:05.705426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262053392.mount: Deactivated successfully.
Mar 17 18:51:06.067188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 17 18:51:06.067349 systemd[1]: Stopped kubelet.service.
Mar 17 18:51:06.068740 systemd[1]: Starting kubelet.service...
Mar 17 18:51:06.164681 systemd[1]: Started kubelet.service.
Mar 17 18:51:06.212437 kubelet[2022]: E0317 18:51:06.212388 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:51:06.214409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:51:06.214540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:51:06.588911 env[1471]: time="2025-03-17T18:51:06.588854140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:06.595126 env[1471]: time="2025-03-17T18:51:06.595066510Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:06.598542 env[1471]: time="2025-03-17T18:51:06.598487094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:06.602740 env[1471]: time="2025-03-17T18:51:06.602691114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:06.603606 env[1471]: time="2025-03-17T18:51:06.603160352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\""
Mar 17 18:51:06.604101 env[1471]: time="2025-03-17T18:51:06.603929588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 17 18:51:07.358387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375852788.mount: Deactivated successfully.
Mar 17 18:51:08.657140 env[1471]: time="2025-03-17T18:51:08.657084465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:08.662342 env[1471]: time="2025-03-17T18:51:08.662289082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:08.666546 env[1471]: time="2025-03-17T18:51:08.666491463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:08.670729 env[1471]: time="2025-03-17T18:51:08.670688844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:08.671611 env[1471]: time="2025-03-17T18:51:08.671582480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Mar 17 18:51:08.672252 env[1471]: time="2025-03-17T18:51:08.672231277Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 18:51:09.702770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794042755.mount: Deactivated successfully.
Mar 17 18:51:09.822742 env[1471]: time="2025-03-17T18:51:09.822703782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:09.915895 env[1471]: time="2025-03-17T18:51:09.915824170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:09.920410 env[1471]: time="2025-03-17T18:51:09.920371549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:09.964988 env[1471]: time="2025-03-17T18:51:09.964833232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:09.965845 env[1471]: time="2025-03-17T18:51:09.965816148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 17 18:51:09.966409 env[1471]: time="2025-03-17T18:51:09.966375025Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 17 18:51:11.475364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176366473.mount: Deactivated successfully.
Mar 17 18:51:16.317245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 17 18:51:16.317421 systemd[1]: Stopped kubelet.service.
Mar 17 18:51:16.318833 systemd[1]: Starting kubelet.service...
Mar 17 18:51:20.062825 waagent[1681]: 2025-03-17T18:51:20.062711Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Mar 17 18:51:20.071887 waagent[1681]: 2025-03-17T18:51:20.071801Z INFO ExtHandler
Mar 17 18:51:20.072069 waagent[1681]: 2025-03-17T18:51:20.072015Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: edb5fd0a-0abc-4e95-be92-42d6da203e12 eTag: 4138857171548110949 source: Fabric]
Mar 17 18:51:20.072925 waagent[1681]: 2025-03-17T18:51:20.072848Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 17 18:51:20.074239 waagent[1681]: 2025-03-17T18:51:20.074150Z INFO ExtHandler
Mar 17 18:51:20.074360 waagent[1681]: 2025-03-17T18:51:20.074311Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Mar 17 18:51:20.154407 waagent[1681]: 2025-03-17T18:51:20.154335Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 17 18:51:24.840966 systemd[1]: Started kubelet.service.
Mar 17 18:51:24.842532 waagent[1681]: 2025-03-17T18:51:24.842268Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C6E7F39F62FCBD1B584508ADE5C9E58A299CCD58', 'hasPrivateKey': True}
Mar 17 18:51:24.843597 waagent[1681]: 2025-03-17T18:51:24.843506Z INFO ExtHandler Downloaded certificate {'thumbprint': '3F0198BBBD713DE4488AFF9082D4243CE0A01D14', 'hasPrivateKey': False}
Mar 17 18:51:24.844777 waagent[1681]: 2025-03-17T18:51:24.844702Z INFO ExtHandler Fetch goal state completed
Mar 17 18:51:24.845871 waagent[1681]: 2025-03-17T18:51:24.845797Z INFO ExtHandler ExtHandler VM enabled for RSM updates, switching to RSM update mode
Mar 17 18:51:24.847267 waagent[1681]: 2025-03-17T18:51:24.847182Z INFO ExtHandler ExtHandler
Mar 17 18:51:24.847404 waagent[1681]: 2025-03-17T18:51:24.847348Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 56758d81-3668-4e32-b41a-3bbc59fc52e1 correlation 43a6769f-b4dc-45b4-b9b5-b52384738a5f created: 2025-03-17T18:51:08.457878Z]
Mar 17 18:51:24.848352 waagent[1681]: 2025-03-17T18:51:24.848267Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 17 18:51:24.850352 waagent[1681]: 2025-03-17T18:51:24.850276Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 3 ms]
Mar 17 18:51:24.881470 kubelet[2040]: E0317 18:51:24.881431 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:51:24.883585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:51:24.883731 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:51:29.658263 env[1471]: time="2025-03-17T18:51:29.658214881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:29.669034 env[1471]: time="2025-03-17T18:51:29.668994568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:29.673121 env[1471]: time="2025-03-17T18:51:29.673084715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:29.678411 env[1471]: time="2025-03-17T18:51:29.678354179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:29.679384 env[1471]: time="2025-03-17T18:51:29.679351456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Mar 17 18:51:35.067299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 17 18:51:35.067462 systemd[1]: Stopped kubelet.service.
Mar 17 18:51:35.068851 systemd[1]: Starting kubelet.service...
Mar 17 18:51:35.370134 systemd[1]: Started kubelet.service.
Mar 17 18:51:35.419425 kubelet[2066]: E0317 18:51:35.419387 2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:51:35.421578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:51:35.421720 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:51:35.656707 systemd[1]: Stopped kubelet.service.
Mar 17 18:51:35.659517 systemd[1]: Starting kubelet.service...
Mar 17 18:51:35.689021 systemd[1]: Reloading.
Mar 17 18:51:35.770819 /usr/lib/systemd/system-generators/torcx-generator[2101]: time="2025-03-17T18:51:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:51:35.770848 /usr/lib/systemd/system-generators/torcx-generator[2101]: time="2025-03-17T18:51:35Z" level=info msg="torcx already run"
Mar 17 18:51:35.821239 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:51:35.821436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:51:35.837477 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:51:36.064576 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 17 18:51:36.064680 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 17 18:51:36.065069 systemd[1]: Stopped kubelet.service.
Mar 17 18:51:36.067369 systemd[1]: Starting kubelet.service...
Mar 17 18:51:41.385210 systemd[1]: Started kubelet.service.
Mar 17 18:51:41.433083 kubelet[2162]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:51:41.433476 kubelet[2162]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:51:41.433528 kubelet[2162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:51:41.433711 kubelet[2162]: I0317 18:51:41.433681 2162 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:51:41.994885 kubelet[2162]: I0317 18:51:41.994845 2162 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 18:51:41.995052 kubelet[2162]: I0317 18:51:41.995042 2162 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:51:41.995387 kubelet[2162]: I0317 18:51:41.995373 2162 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 18:51:42.012784 kubelet[2162]: E0317 18:51:42.012744 2162 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:42.014133 kubelet[2162]: I0317 18:51:42.014101 2162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:51:42.020717 kubelet[2162]: E0317 18:51:42.020678 2162 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:51:42.020717 kubelet[2162]: I0317 18:51:42.020716 2162 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:51:42.024234 kubelet[2162]: I0317 18:51:42.024189 2162 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:51:42.025213 kubelet[2162]: I0317 18:51:42.025165 2162 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:51:42.025460 kubelet[2162]: I0317 18:51:42.025210 2162 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-2552a29e1b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:51:42.025563 kubelet[2162]: I0317 18:51:42.025465 2162 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:51:42.025563 kubelet[2162]: I0317 18:51:42.025474 2162 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 18:51:42.025613 kubelet[2162]: I0317 18:51:42.025608 2162 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:51:42.029914 kubelet[2162]: I0317 18:51:42.029880 2162 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 18:51:42.029914 kubelet[2162]: I0317 18:51:42.029915 2162 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:51:42.030051 kubelet[2162]: I0317 18:51:42.029935 2162 kubelet.go:352] "Adding apiserver pod source"
Mar 17 18:51:42.030051 kubelet[2162]: I0317 18:51:42.029946 2162 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:51:42.039332 kubelet[2162]: W0317 18:51:42.039262 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:42.039472 kubelet[2162]: E0317 18:51:42.039340 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:42.039472 kubelet[2162]: I0317 18:51:42.039441 2162 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:51:42.039939 kubelet[2162]: I0317 18:51:42.039915 2162 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:51:42.040002 kubelet[2162]: W0317 18:51:42.039975 2162 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:51:42.040542 kubelet[2162]: I0317 18:51:42.040503 2162 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 18:51:42.040542 kubelet[2162]: I0317 18:51:42.040543 2162 server.go:1287] "Started kubelet"
Mar 17 18:51:42.040725 kubelet[2162]: W0317 18:51:42.040688 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:42.040815 kubelet[2162]: E0317 18:51:42.040733 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:42.052210 kubelet[2162]: E0317 18:51:42.052182 2162 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:51:42.052592 kubelet[2162]: I0317 18:51:42.052569 2162 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:51:42.053544 kubelet[2162]: I0317 18:51:42.053528 2162 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 18:51:42.055170 kubelet[2162]: I0317 18:51:42.055120 2162 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:51:42.056445 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:51:42.056559 kubelet[2162]: I0317 18:51:42.056515 2162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:51:42.056769 kubelet[2162]: I0317 18:51:42.056680 2162 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:51:42.060027 kubelet[2162]: I0317 18:51:42.059995 2162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:51:42.061437 kubelet[2162]: E0317 18:51:42.061303 2162 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-a-2552a29e1b.182dabc6ff7d4ed9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-2552a29e1b,UID:ci-3510.3.7-a-2552a29e1b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-2552a29e1b,},FirstTimestamp:2025-03-17 18:51:42.040526553 +0000 UTC m=+0.649824875,LastTimestamp:2025-03-17 18:51:42.040526553 +0000 UTC m=+0.649824875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-2552a29e1b,}"
Mar 17 18:51:42.062154 kubelet[2162]: I0317 18:51:42.062129 2162 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 18:51:42.062375 kubelet[2162]: I0317 18:51:42.062360 2162 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:51:42.062503 kubelet[2162]: I0317 18:51:42.062491 2162 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:51:42.063041 kubelet[2162]: W0317 18:51:42.062998 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:42.063199 kubelet[2162]: E0317 18:51:42.063150 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:42.064572 kubelet[2162]: E0317 18:51:42.064546 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.064947 kubelet[2162]: E0317 18:51:42.064921 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2552a29e1b?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="200ms"
Mar 17 18:51:42.065261 kubelet[2162]: I0317 18:51:42.065239 2162 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:51:42.065348 kubelet[2162]: I0317 18:51:42.065338 2162 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:51:42.065498 kubelet[2162]: I0317 18:51:42.065482 2162 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:51:42.145846 kubelet[2162]: I0317 18:51:42.145824 2162 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 17 18:51:42.145986 kubelet[2162]: I0317 18:51:42.145974 2162 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 17 18:51:42.146054 kubelet[2162]: I0317 18:51:42.146045 2162 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:51:42.165744 kubelet[2162]: E0317 18:51:42.165719 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.266811 kubelet[2162]: E0317 18:51:42.265560 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2552a29e1b?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="400ms"
Mar 17 18:51:42.266994 kubelet[2162]: E0317 18:51:42.266973 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.367423 kubelet[2162]: E0317 18:51:42.367395 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.467672 kubelet[2162]: E0317 18:51:42.467623 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.568595 kubelet[2162]: E0317 18:51:42.568570 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.666237 kubelet[2162]: E0317 18:51:42.666203 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2552a29e1b?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="800ms"
Mar 17 18:51:42.669360 kubelet[2162]: E0317 18:51:42.669339 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.769834 kubelet[2162]: E0317 18:51:42.769815 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.870644 kubelet[2162]: E0317 18:51:42.870292 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:42.876805 kubelet[2162]: W0317 18:51:42.876783 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:42.876943 kubelet[2162]: E0317 18:51:42.876924 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:42.970618 kubelet[2162]: E0317 18:51:42.970580 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.064137 kubelet[2162]: W0317 18:51:43.064106 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:43.064235 kubelet[2162]: E0317 18:51:43.064153 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:43.071654 kubelet[2162]: E0317 18:51:43.071621 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.172539 kubelet[2162]: E0317 18:51:43.172280 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.273061 kubelet[2162]: E0317 18:51:43.273026 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.373670 kubelet[2162]: E0317 18:51:43.373642 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.400200 kubelet[2162]: W0317 18:51:43.400175 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:43.400263 kubelet[2162]: E0317 18:51:43.400213 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:43.467204 kubelet[2162]: E0317 18:51:43.466938 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2552a29e1b?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="1.6s"
Mar 17 18:51:43.474288 kubelet[2162]: E0317 18:51:43.474258 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.575298 kubelet[2162]: E0317 18:51:43.575269 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.675939 kubelet[2162]: E0317 18:51:43.675920 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.776748 kubelet[2162]: E0317 18:51:43.776512 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.877160 kubelet[2162]: E0317 18:51:43.877124 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:43.977837 kubelet[2162]: E0317 18:51:43.977804 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.078001 kubelet[2162]: E0317 18:51:44.077961 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.078413 kubelet[2162]: E0317 18:51:44.078387 2162 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:44.178787 kubelet[2162]: E0317 18:51:44.178742 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.279400 kubelet[2162]: E0317 18:51:44.279362 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.759963 kubelet[2162]: E0317 18:51:44.379919 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.759963 kubelet[2162]: E0317 18:51:44.480570 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.759963 kubelet[2162]: E0317 18:51:44.581495 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.759963 kubelet[2162]: I0317 18:51:44.588585 2162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:51:44.759963 kubelet[2162]: I0317 18:51:44.589557 2162 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:51:44.759963 kubelet[2162]: I0317 18:51:44.589580 2162 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 18:51:44.759963 kubelet[2162]: I0317 18:51:44.589602 2162 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 18:51:44.759963 kubelet[2162]: I0317 18:51:44.589609 2162 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 18:51:44.759963 kubelet[2162]: E0317 18:51:44.589674 2162 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:51:44.759963 kubelet[2162]: W0317 18:51:44.592826 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:44.759963 kubelet[2162]: E0317 18:51:44.592862 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:44.759963 kubelet[2162]: E0317 18:51:44.681658 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.760711 kubelet[2162]: E0317 18:51:44.689772 2162 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:51:44.782033 kubelet[2162]: E0317 18:51:44.781991 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.821413 kubelet[2162]: I0317 18:51:44.821387 2162 policy_none.go:49] "None policy: Start"
Mar 17 18:51:44.821515 kubelet[2162]: I0317 18:51:44.821506 2162 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 17 18:51:44.821585 kubelet[2162]: I0317 18:51:44.821576 2162 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:51:44.882787 kubelet[2162]: E0317 18:51:44.882741 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:44.889882 kubelet[2162]: E0317 18:51:44.889860 2162 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:51:44.922214 systemd[1]: Created slice kubepods.slice.
Mar 17 18:51:44.926986 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 18:51:44.929685 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 18:51:44.940674 kubelet[2162]: I0317 18:51:44.940645 2162 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:51:44.941482 kubelet[2162]: I0317 18:51:44.941465 2162 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:51:44.941622 kubelet[2162]: I0317 18:51:44.941586 2162 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:51:44.942129 kubelet[2162]: I0317 18:51:44.942114 2162 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:51:44.944264 kubelet[2162]: E0317 18:51:44.944242 2162 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 17 18:51:44.944417 kubelet[2162]: E0317 18:51:44.944405 2162 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-a-2552a29e1b\" not found"
Mar 17 18:51:45.044299 kubelet[2162]: I0317 18:51:45.044201 2162 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.045370 kubelet[2162]: E0317 18:51:45.045339 2162 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.068373 kubelet[2162]: E0317 18:51:45.068342 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2552a29e1b?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="3.2s"
Mar 17 18:51:45.120971 kubelet[2162]: E0317 18:51:45.120869 2162 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-a-2552a29e1b.182dabc6ff7d4ed9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-2552a29e1b,UID:ci-3510.3.7-a-2552a29e1b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-2552a29e1b,},FirstTimestamp:2025-03-17 18:51:42.040526553 +0000 UTC m=+0.649824875,LastTimestamp:2025-03-17 18:51:42.040526553 +0000 UTC m=+0.649824875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-2552a29e1b,}"
Mar 17 18:51:45.248020 kubelet[2162]: I0317 18:51:45.247982 2162 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.248332 kubelet[2162]: E0317 18:51:45.248306 2162 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.298971 systemd[1]: Created slice kubepods-burstable-pod61ca954dca68fe24ff1bd96c32a1c0e3.slice.
Mar 17 18:51:45.308967 kubelet[2162]: E0317 18:51:45.308937 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.311962 systemd[1]: Created slice kubepods-burstable-pod4f7ba850e65a4d8eccb5da9348afbf8a.slice.
Mar 17 18:51:45.313901 kubelet[2162]: E0317 18:51:45.313618 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.326062 systemd[1]: Created slice kubepods-burstable-podf81bcfa5672c6940629a9d8728753c31.slice.
Mar 17 18:51:45.328025 kubelet[2162]: E0317 18:51:45.327995 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.349907 kubelet[2162]: W0317 18:51:45.349848 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:45.350102 kubelet[2162]: E0317 18:51:45.350081 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:45.381893 kubelet[2162]: I0317 18:51:45.381866 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ca954dca68fe24ff1bd96c32a1c0e3-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2552a29e1b\" (UID: \"61ca954dca68fe24ff1bd96c32a1c0e3\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382060 kubelet[2162]: I0317 18:51:45.382046 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382183 kubelet[2162]: I0317 18:51:45.382170 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f81bcfa5672c6940629a9d8728753c31-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-2552a29e1b\" (UID: \"f81bcfa5672c6940629a9d8728753c31\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382270 kubelet[2162]: I0317 18:51:45.382258 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ca954dca68fe24ff1bd96c32a1c0e3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2552a29e1b\" (UID: \"61ca954dca68fe24ff1bd96c32a1c0e3\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382352 kubelet[2162]: I0317 18:51:45.382339 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ca954dca68fe24ff1bd96c32a1c0e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-2552a29e1b\" (UID: \"61ca954dca68fe24ff1bd96c32a1c0e3\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382465 kubelet[2162]: I0317 18:51:45.382453 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382560 kubelet[2162]: I0317 18:51:45.382549 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382679 kubelet[2162]: I0317 18:51:45.382667 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.382791 kubelet[2162]: I0317 18:51:45.382779 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.408021 kubelet[2162]: W0317 18:51:45.407980 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:45.408205 kubelet[2162]: E0317 18:51:45.408189 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:45.611098 env[1471]: time="2025-03-17T18:51:45.610773975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-2552a29e1b,Uid:61ca954dca68fe24ff1bd96c32a1c0e3,Namespace:kube-system,Attempt:0,}"
Mar 17 18:51:45.615366 env[1471]: time="2025-03-17T18:51:45.615198484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-2552a29e1b,Uid:4f7ba850e65a4d8eccb5da9348afbf8a,Namespace:kube-system,Attempt:0,}"
Mar 17 18:51:45.629236 env[1471]: time="2025-03-17T18:51:45.629192450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-2552a29e1b,Uid:f81bcfa5672c6940629a9d8728753c31,Namespace:kube-system,Attempt:0,}"
Mar 17 18:51:45.631902 kubelet[2162]: W0317 18:51:45.631875 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:45.632059 kubelet[2162]: E0317 18:51:45.632042 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:45.650737 kubelet[2162]: I0317 18:51:45.650705 2162 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:45.651148 kubelet[2162]: E0317 18:51:45.651123 2162 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:46.321075 kubelet[2162]: W0317 18:51:46.321014 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:46.321418 kubelet[2162]: E0317 18:51:46.321082 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:46.452776 kubelet[2162]: I0317 18:51:46.452756 2162 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:46.453230 kubelet[2162]: E0317 18:51:46.453211 2162 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:48.055666 kubelet[2162]: I0317 18:51:48.055613 2162 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:48.056513 kubelet[2162]: E0317 18:51:48.056486 2162 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.7-a-2552a29e1b"
Mar 17 18:51:48.144090 kubelet[2162]: E0317 18:51:48.144057 2162 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:48.253247 kubelet[2162]: W0317 18:51:48.253219 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:48.253407 kubelet[2162]: E0317 18:51:48.253390 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:48.269253 kubelet[2162]: E0317 18:51:48.269224 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2552a29e1b?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="6.4s"
Mar 17 18:51:48.782823 kubelet[2162]: W0317 18:51:48.782787 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Mar 17 18:51:48.782981 kubelet[2162]: E0317 18:51:48.782829 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:51:49.876103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591355266.mount: Deactivated successfully.
Mar 17 18:51:49.899185 env[1471]: time="2025-03-17T18:51:49.899141689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.915056 env[1471]: time="2025-03-17T18:51:49.914998252Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.920310 env[1471]: time="2025-03-17T18:51:49.920272200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.924010 env[1471]: time="2025-03-17T18:51:49.923968512Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.934114 env[1471]: time="2025-03-17T18:51:49.934073848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.942755 env[1471]: time="2025-03-17T18:51:49.942703189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.946560 env[1471]: time="2025-03-17T18:51:49.946522620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.949709 env[1471]: time="2025-03-17T18:51:49.949664933Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.953113 env[1471]: time="2025-03-17T18:51:49.953082125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.957645 env[1471]: time="2025-03-17T18:51:49.957596394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.968171 env[1471]: time="2025-03-17T18:51:49.968118610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:49.979780 env[1471]: time="2025-03-17T18:51:49.979736864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:51:50.022977 env[1471]: time="2025-03-17T18:51:50.022897085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:51:50.023239 env[1471]: time="2025-03-17T18:51:50.022981365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:51:50.023239 env[1471]: time="2025-03-17T18:51:50.023039805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:51:50.023398 env[1471]: time="2025-03-17T18:51:50.023358844Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11c6057cf5c15a72e0c8adfae6c33d90c24a32de1a5b8fe965580f7ff3d1d499 pid=2202 runtime=io.containerd.runc.v2
Mar 17 18:51:50.041602 systemd[1]: Started cri-containerd-11c6057cf5c15a72e0c8adfae6c33d90c24a32de1a5b8fe965580f7ff3d1d499.scope.
Mar 17 18:51:50.056802 env[1471]: time="2025-03-17T18:51:50.056704689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:51:50.056802 env[1471]: time="2025-03-17T18:51:50.056762848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:51:50.057002 env[1471]: time="2025-03-17T18:51:50.056773568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:51:50.057002 env[1471]: time="2025-03-17T18:51:50.056917728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d41d96dd9dec5f0360a97dcbdb2d01210f381ae31b3fc3b0ea1dceddc87826a0 pid=2228 runtime=io.containerd.runc.v2
Mar 17 18:51:50.073622 env[1471]: time="2025-03-17T18:51:50.073436171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:51:50.073622 env[1471]: time="2025-03-17T18:51:50.073528570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:51:50.073884 env[1471]: time="2025-03-17T18:51:50.073847690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:51:50.074165 env[1471]: time="2025-03-17T18:51:50.074114689Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c7f5ac743aaf0e72e5c0e8cff2b44a69626b01ebb62491ea28738a56792feec pid=2259 runtime=io.containerd.runc.v2
Mar 17 18:51:50.084386 systemd[1]: Started cri-containerd-d41d96dd9dec5f0360a97dcbdb2d01210f381ae31b3fc3b0ea1dceddc87826a0.scope.
Mar 17 18:51:50.108464 env[1471]: time="2025-03-17T18:51:50.108421491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-2552a29e1b,Uid:f81bcfa5672c6940629a9d8728753c31,Namespace:kube-system,Attempt:0,} returns sandbox id \"11c6057cf5c15a72e0c8adfae6c33d90c24a32de1a5b8fe965580f7ff3d1d499\""
Mar 17 18:51:50.112819 env[1471]: time="2025-03-17T18:51:50.112779041Z" level=info msg="CreateContainer within sandbox \"11c6057cf5c15a72e0c8adfae6c33d90c24a32de1a5b8fe965580f7ff3d1d499\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:51:50.118454 systemd[1]: Started cri-containerd-2c7f5ac743aaf0e72e5c0e8cff2b44a69626b01ebb62491ea28738a56792feec.scope.
Mar 17 18:51:50.127798 env[1471]: time="2025-03-17T18:51:50.127669488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-2552a29e1b,Uid:4f7ba850e65a4d8eccb5da9348afbf8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d41d96dd9dec5f0360a97dcbdb2d01210f381ae31b3fc3b0ea1dceddc87826a0\"" Mar 17 18:51:50.131996 env[1471]: time="2025-03-17T18:51:50.131952158Z" level=info msg="CreateContainer within sandbox \"d41d96dd9dec5f0360a97dcbdb2d01210f381ae31b3fc3b0ea1dceddc87826a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:51:50.165159 env[1471]: time="2025-03-17T18:51:50.165117523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-2552a29e1b,Uid:61ca954dca68fe24ff1bd96c32a1c0e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c7f5ac743aaf0e72e5c0e8cff2b44a69626b01ebb62491ea28738a56792feec\"" Mar 17 18:51:50.167995 env[1471]: time="2025-03-17T18:51:50.167948396Z" level=info msg="CreateContainer within sandbox \"2c7f5ac743aaf0e72e5c0e8cff2b44a69626b01ebb62491ea28738a56792feec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:51:50.275769 env[1471]: time="2025-03-17T18:51:50.275721952Z" level=info msg="CreateContainer within sandbox \"11c6057cf5c15a72e0c8adfae6c33d90c24a32de1a5b8fe965580f7ff3d1d499\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"41246707d1d1b3e82553354c79dc835dca58e4308064aa4ef688fc0933d2671e\"" Mar 17 18:51:50.276587 env[1471]: time="2025-03-17T18:51:50.276553550Z" level=info msg="StartContainer for \"41246707d1d1b3e82553354c79dc835dca58e4308064aa4ef688fc0933d2671e\"" Mar 17 18:51:50.294669 systemd[1]: Started cri-containerd-41246707d1d1b3e82553354c79dc835dca58e4308064aa4ef688fc0933d2671e.scope. 
Mar 17 18:51:51.259162 kubelet[2162]: I0317 18:51:51.258787 2162 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:51.259162 kubelet[2162]: E0317 18:51:51.259110 2162 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:51.535429 kubelet[2162]: W0317 18:51:51.535322 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Mar 17 18:51:51.535429 kubelet[2162]: E0317 18:51:51.535366 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2552a29e1b&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:51:51.617377 env[1471]: time="2025-03-17T18:51:51.617273006Z" level=info msg="StartContainer for \"41246707d1d1b3e82553354c79dc835dca58e4308064aa4ef688fc0933d2671e\" returns successfully" Mar 17 18:51:51.628873 kubelet[2162]: E0317 18:51:51.628840 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:51.899482 kubelet[2162]: W0317 18:51:51.899423 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Mar 17 18:51:52.070910 kubelet[2162]: E0317 
18:51:51.899618 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:51:52.117270 env[1471]: time="2025-03-17T18:51:52.117221487Z" level=info msg="CreateContainer within sandbox \"d41d96dd9dec5f0360a97dcbdb2d01210f381ae31b3fc3b0ea1dceddc87826a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"539ec0e2afa76a114b075883ca8886fc3aff5dd5a1194b6016a13cac7c78b8d9\"" Mar 17 18:51:52.117796 env[1471]: time="2025-03-17T18:51:52.117767806Z" level=info msg="StartContainer for \"539ec0e2afa76a114b075883ca8886fc3aff5dd5a1194b6016a13cac7c78b8d9\"" Mar 17 18:51:52.139030 systemd[1]: Started cri-containerd-539ec0e2afa76a114b075883ca8886fc3aff5dd5a1194b6016a13cac7c78b8d9.scope. Mar 17 18:51:52.215140 env[1471]: time="2025-03-17T18:51:52.215030190Z" level=info msg="CreateContainer within sandbox \"2c7f5ac743aaf0e72e5c0e8cff2b44a69626b01ebb62491ea28738a56792feec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3abacd558393a32eb07af1ed6ffc51b33ac268a04219e1ea491069159b5f6c25\"" Mar 17 18:51:52.215482 env[1471]: time="2025-03-17T18:51:52.215377390Z" level=info msg="StartContainer for \"539ec0e2afa76a114b075883ca8886fc3aff5dd5a1194b6016a13cac7c78b8d9\" returns successfully" Mar 17 18:51:52.216173 env[1471]: time="2025-03-17T18:51:52.216146348Z" level=info msg="StartContainer for \"3abacd558393a32eb07af1ed6ffc51b33ac268a04219e1ea491069159b5f6c25\"" Mar 17 18:51:52.247405 systemd[1]: Started cri-containerd-3abacd558393a32eb07af1ed6ffc51b33ac268a04219e1ea491069159b5f6c25.scope. 
Mar 17 18:51:52.285154 env[1471]: time="2025-03-17T18:51:52.285106115Z" level=info msg="StartContainer for \"3abacd558393a32eb07af1ed6ffc51b33ac268a04219e1ea491069159b5f6c25\" returns successfully" Mar 17 18:51:52.339768 kubelet[2162]: W0317 18:51:52.339699 2162 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Mar 17 18:51:52.340104 kubelet[2162]: E0317 18:51:52.339774 2162 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:51:52.631083 kubelet[2162]: E0317 18:51:52.631051 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:52.633756 kubelet[2162]: E0317 18:51:52.633503 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:52.633756 kubelet[2162]: E0317 18:51:52.633620 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:53.125541 systemd[1]: run-containerd-runc-k8s.io-3abacd558393a32eb07af1ed6ffc51b33ac268a04219e1ea491069159b5f6c25-runc.3tuAwm.mount: Deactivated successfully. 
Mar 17 18:51:53.634982 kubelet[2162]: E0317 18:51:53.634945 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:53.635330 kubelet[2162]: E0317 18:51:53.635313 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:54.636612 kubelet[2162]: E0317 18:51:54.636581 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:54.717067 kubelet[2162]: E0317 18:51:54.717019 2162 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:54.808416 kubelet[2162]: E0317 18:51:54.808384 2162 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.7-a-2552a29e1b" not found Mar 17 18:51:54.944771 kubelet[2162]: E0317 18:51:54.944622 2162 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:55.407173 kubelet[2162]: E0317 18:51:55.407132 2162 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.7-a-2552a29e1b" not found Mar 17 18:51:55.440973 kubelet[2162]: E0317 18:51:55.440939 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:55.852020 kubelet[2162]: E0317 18:51:55.851991 2162 csi_plugin.go:308] Failed to 
initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.7-a-2552a29e1b" not found Mar 17 18:51:56.787333 kubelet[2162]: E0317 18:51:56.787304 2162 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.7-a-2552a29e1b" not found Mar 17 18:51:57.661434 kubelet[2162]: I0317 18:51:57.661398 2162 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:57.669541 kubelet[2162]: I0317 18:51:57.669499 2162 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:57.669541 kubelet[2162]: E0317 18:51:57.669543 2162 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-3510.3.7-a-2552a29e1b\": node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:57.673640 kubelet[2162]: E0317 18:51:57.673600 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:57.774080 kubelet[2162]: E0317 18:51:57.774009 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:57.874749 kubelet[2162]: E0317 18:51:57.874701 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:57.930851 kubelet[2162]: E0317 18:51:57.930699 2162 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:57.975562 kubelet[2162]: E0317 18:51:57.975523 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:58.070437 systemd[1]: Reloading. 
Mar 17 18:51:58.076655 kubelet[2162]: E0317 18:51:58.076600 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:58.151543 /usr/lib/systemd/system-generators/torcx-generator[2457]: time="2025-03-17T18:51:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:51:58.151933 /usr/lib/systemd/system-generators/torcx-generator[2457]: time="2025-03-17T18:51:58Z" level=info msg="torcx already run" Mar 17 18:51:58.177968 kubelet[2162]: E0317 18:51:58.177928 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:58.226037 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:51:58.226062 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:51:58.244726 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:51:58.278536 kubelet[2162]: E0317 18:51:58.278493 2162 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:58.360027 systemd[1]: Stopping kubelet.service... Mar 17 18:51:58.376081 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:51:58.376294 systemd[1]: Stopped kubelet.service. Mar 17 18:51:58.378196 systemd[1]: Starting kubelet.service... Mar 17 18:51:58.473298 systemd[1]: Started kubelet.service. 
Mar 17 18:51:58.553310 kubelet[2520]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:51:58.553310 kubelet[2520]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 18:51:58.553310 kubelet[2520]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:51:58.553841 kubelet[2520]: I0317 18:51:58.553797 2520 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:51:58.568735 kubelet[2520]: I0317 18:51:58.568689 2520 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 18:51:58.568735 kubelet[2520]: I0317 18:51:58.568721 2520 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:51:58.569506 kubelet[2520]: I0317 18:51:58.569484 2520 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 18:51:58.570845 kubelet[2520]: I0317 18:51:58.570823 2520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 17 18:51:58.573081 kubelet[2520]: I0317 18:51:58.573053 2520 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:51:58.578840 kubelet[2520]: E0317 18:51:58.578788 2520 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:51:58.578840 kubelet[2520]: I0317 18:51:58.578838 2520 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:51:58.581674 kubelet[2520]: I0317 18:51:58.581652 2520 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:51:58.581856 kubelet[2520]: I0317 18:51:58.581823 2520 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:51:58.582024 kubelet[2520]: I0317 18:51:58.581853 2520 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.7-a-2552a29e1b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:51:58.582115 kubelet[2520]: I0317 18:51:58.582029 2520 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:51:58.582115 kubelet[2520]: I0317 18:51:58.582038 2520 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 18:51:58.582115 kubelet[2520]: I0317 18:51:58.582076 2520 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:51:58.582197 kubelet[2520]: I0317 18:51:58.582191 2520 
kubelet.go:446] "Attempting to sync node with API server" Mar 17 18:51:58.582228 kubelet[2520]: I0317 18:51:58.582202 2520 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:51:58.582228 kubelet[2520]: I0317 18:51:58.582219 2520 kubelet.go:352] "Adding apiserver pod source" Mar 17 18:51:58.582272 kubelet[2520]: I0317 18:51:58.582230 2520 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:51:58.583985 kubelet[2520]: I0317 18:51:58.583716 2520 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:51:58.585894 kubelet[2520]: I0317 18:51:58.585861 2520 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:51:58.586312 kubelet[2520]: I0317 18:51:58.586281 2520 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 18:51:58.586312 kubelet[2520]: I0317 18:51:58.586314 2520 server.go:1287] "Started kubelet" Mar 17 18:51:58.590037 kubelet[2520]: I0317 18:51:58.590001 2520 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:51:58.591580 kubelet[2520]: I0317 18:51:58.591551 2520 server.go:490] "Adding debug handlers to kubelet server" Mar 17 18:51:58.593100 kubelet[2520]: I0317 18:51:58.593027 2520 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:51:58.593289 kubelet[2520]: I0317 18:51:58.593264 2520 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:51:58.596162 kubelet[2520]: I0317 18:51:58.596133 2520 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:51:58.605869 kubelet[2520]: I0317 18:51:58.605833 2520 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:51:58.607512 kubelet[2520]: 
I0317 18:51:58.607486 2520 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 18:51:58.609167 kubelet[2520]: E0317 18:51:58.609137 2520 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-2552a29e1b\" not found" Mar 17 18:51:58.613961 kubelet[2520]: I0317 18:51:58.613930 2520 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:51:58.615033 kubelet[2520]: I0317 18:51:58.615016 2520 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:51:58.617765 kubelet[2520]: I0317 18:51:58.617726 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:51:58.618790 kubelet[2520]: I0317 18:51:58.618767 2520 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:51:58.618908 kubelet[2520]: I0317 18:51:58.618897 2520 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 18:51:58.618989 kubelet[2520]: I0317 18:51:58.618979 2520 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 18:51:58.619038 kubelet[2520]: I0317 18:51:58.619030 2520 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 18:51:58.619143 kubelet[2520]: E0317 18:51:58.619126 2520 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:51:58.634137 kubelet[2520]: I0317 18:51:58.634098 2520 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:51:58.645082 kubelet[2520]: E0317 18:51:58.645044 2520 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:51:58.645531 kubelet[2520]: I0317 18:51:58.645506 2520 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:51:58.645531 kubelet[2520]: I0317 18:51:58.645524 2520 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:51:58.695929 kubelet[2520]: I0317 18:51:58.695903 2520 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 18:51:58.696097 kubelet[2520]: I0317 18:51:58.696082 2520 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 18:51:58.696169 kubelet[2520]: I0317 18:51:58.696160 2520 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:51:58.696425 kubelet[2520]: I0317 18:51:58.696411 2520 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:51:58.696511 kubelet[2520]: I0317 18:51:58.696486 2520 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:51:58.696564 kubelet[2520]: I0317 18:51:58.696555 2520 policy_none.go:49] "None policy: Start" Mar 17 18:51:58.696741 kubelet[2520]: I0317 18:51:58.696716 2520 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 18:51:58.696787 kubelet[2520]: I0317 18:51:58.696748 2520 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:51:58.696920 kubelet[2520]: I0317 18:51:58.696905 2520 state_mem.go:75] "Updated machine memory state" Mar 17 18:51:58.700793 kubelet[2520]: I0317 18:51:58.700765 2520 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:51:58.700940 kubelet[2520]: I0317 18:51:58.700919 2520 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:51:58.700979 kubelet[2520]: I0317 18:51:58.700937 2520 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:51:58.701523 kubelet[2520]: I0317 
18:51:58.701510 2520 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:51:58.703224 kubelet[2520]: E0317 18:51:58.703206 2520 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 18:51:58.720291 kubelet[2520]: I0317 18:51:58.720262 2520 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.720750 kubelet[2520]: I0317 18:51:58.720734 2520 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.722246 kubelet[2520]: I0317 18:51:58.722228 2520 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.732730 kubelet[2520]: W0317 18:51:58.732702 2520 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:58.732954 kubelet[2520]: W0317 18:51:58.732941 2520 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:58.736362 kubelet[2520]: W0317 18:51:58.736334 2520 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:58.804012 kubelet[2520]: I0317 18:51:58.803911 2520 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.815831 kubelet[2520]: I0317 18:51:58.815800 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: 
\"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816050 kubelet[2520]: I0317 18:51:58.816035 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816150 kubelet[2520]: I0317 18:51:58.816135 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f81bcfa5672c6940629a9d8728753c31-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-2552a29e1b\" (UID: \"f81bcfa5672c6940629a9d8728753c31\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816271 kubelet[2520]: I0317 18:51:58.816259 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816368 kubelet[2520]: I0317 18:51:58.816357 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816466 kubelet[2520]: I0317 18:51:58.816451 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7ba850e65a4d8eccb5da9348afbf8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-2552a29e1b\" (UID: \"4f7ba850e65a4d8eccb5da9348afbf8a\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816561 kubelet[2520]: I0317 18:51:58.816550 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ca954dca68fe24ff1bd96c32a1c0e3-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2552a29e1b\" (UID: \"61ca954dca68fe24ff1bd96c32a1c0e3\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816674 kubelet[2520]: I0317 18:51:58.816661 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ca954dca68fe24ff1bd96c32a1c0e3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2552a29e1b\" (UID: \"61ca954dca68fe24ff1bd96c32a1c0e3\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.816790 kubelet[2520]: I0317 18:51:58.816763 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ca954dca68fe24ff1bd96c32a1c0e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-2552a29e1b\" (UID: \"61ca954dca68fe24ff1bd96c32a1c0e3\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.818443 kubelet[2520]: I0317 18:51:58.818403 2520 kubelet_node_status.go:125] "Node was previously registered" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:58.818564 kubelet[2520]: I0317 18:51:58.818500 2520 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:59.583600 kubelet[2520]: I0317 18:51:59.583568 2520 apiserver.go:52] "Watching apiserver" Mar 17 
18:51:59.615745 kubelet[2520]: I0317 18:51:59.615711 2520 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:51:59.620607 kubelet[2520]: I0317 18:51:59.620506 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-a-2552a29e1b" podStartSLOduration=1.6204880149999998 podStartE2EDuration="1.620488015s" podCreationTimestamp="2025-03-17 18:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:59.609713637 +0000 UTC m=+1.131809972" watchObservedRunningTime="2025-03-17 18:51:59.620488015 +0000 UTC m=+1.142584350" Mar 17 18:51:59.632104 kubelet[2520]: I0317 18:51:59.632000 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b" podStartSLOduration=1.6319804310000001 podStartE2EDuration="1.631980431s" podCreationTimestamp="2025-03-17 18:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:59.620668294 +0000 UTC m=+1.142764629" watchObservedRunningTime="2025-03-17 18:51:59.631980431 +0000 UTC m=+1.154076766" Mar 17 18:51:59.671558 kubelet[2520]: I0317 18:51:59.671531 2520 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:59.672248 kubelet[2520]: I0317 18:51:59.672223 2520 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:59.688008 kubelet[2520]: W0317 18:51:59.687968 2520 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:59.688164 kubelet[2520]: E0317 18:51:59.688041 2520 kubelet.go:3202] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-a-2552a29e1b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:59.688556 kubelet[2520]: W0317 18:51:59.688537 2520 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:51:59.688722 kubelet[2520]: E0317 18:51:59.688706 2520 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-a-2552a29e1b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-a-2552a29e1b" Mar 17 18:51:59.689606 kubelet[2520]: I0317 18:51:59.689560 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2552a29e1b" podStartSLOduration=1.689545912 podStartE2EDuration="1.689545912s" podCreationTimestamp="2025-03-17 18:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:59.632980349 +0000 UTC m=+1.155076684" watchObservedRunningTime="2025-03-17 18:51:59.689545912 +0000 UTC m=+1.211642247" Mar 17 18:52:01.980072 sudo[2551]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:52:01.980623 sudo[2551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:52:02.322292 kubelet[2520]: I0317 18:52:02.322259 2520 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:52:02.323197 env[1471]: time="2025-03-17T18:52:02.323154105Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:52:02.323462 kubelet[2520]: I0317 18:52:02.323381 2520 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:52:02.485241 sudo[2551]: pam_unix(sudo:session): session closed for user root Mar 17 18:52:04.421298 kubelet[2520]: I0317 18:52:03.265350 2520 status_manager.go:890] "Failed to get status for pod" podUID="561bfd32-af00-4e73-92f8-473d7cf3782c" pod="kube-system/kube-proxy-jmkgh" err="pods \"kube-proxy-jmkgh\" is forbidden: User \"system:node:ci-3510.3.7-a-2552a29e1b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-a-2552a29e1b' and this object" Mar 17 18:52:04.421298 kubelet[2520]: W0317 18:52:03.266819 2520 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.7-a-2552a29e1b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2552a29e1b' and this object Mar 17 18:52:04.421298 kubelet[2520]: E0317 18:52:03.266851 2520 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510.3.7-a-2552a29e1b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-a-2552a29e1b' and this object" logger="UnhandledError" Mar 17 18:52:04.421298 kubelet[2520]: W0317 18:52:03.267011 2520 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.7-a-2552a29e1b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2552a29e1b' and this object Mar 17 18:52:04.421298 
kubelet[2520]: E0317 18:52:03.267028 2520 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.7-a-2552a29e1b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-a-2552a29e1b' and this object" logger="UnhandledError" Mar 17 18:52:03.265619 systemd[1]: Created slice kubepods-besteffort-pod561bfd32_af00_4e73_92f8_473d7cf3782c.slice. Mar 17 18:52:04.421936 kubelet[2520]: I0317 18:52:03.337762 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/561bfd32-af00-4e73-92f8-473d7cf3782c-kube-proxy\") pod \"kube-proxy-jmkgh\" (UID: \"561bfd32-af00-4e73-92f8-473d7cf3782c\") " pod="kube-system/kube-proxy-jmkgh" Mar 17 18:52:04.421936 kubelet[2520]: I0317 18:52:03.337800 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/561bfd32-af00-4e73-92f8-473d7cf3782c-lib-modules\") pod \"kube-proxy-jmkgh\" (UID: \"561bfd32-af00-4e73-92f8-473d7cf3782c\") " pod="kube-system/kube-proxy-jmkgh" Mar 17 18:52:04.421936 kubelet[2520]: I0317 18:52:03.337820 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/561bfd32-af00-4e73-92f8-473d7cf3782c-xtables-lock\") pod \"kube-proxy-jmkgh\" (UID: \"561bfd32-af00-4e73-92f8-473d7cf3782c\") " pod="kube-system/kube-proxy-jmkgh" Mar 17 18:52:04.421936 kubelet[2520]: I0317 18:52:03.337837 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flgzb\" (UniqueName: 
\"kubernetes.io/projected/561bfd32-af00-4e73-92f8-473d7cf3782c-kube-api-access-flgzb\") pod \"kube-proxy-jmkgh\" (UID: \"561bfd32-af00-4e73-92f8-473d7cf3782c\") " pod="kube-system/kube-proxy-jmkgh" Mar 17 18:52:04.456940 kubelet[2520]: E0317 18:52:04.456888 2520 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:52:04.456940 kubelet[2520]: E0317 18:52:04.456937 2520 projected.go:194] Error preparing data for projected volume kube-api-access-flgzb for pod kube-system/kube-proxy-jmkgh: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:52:04.457129 kubelet[2520]: E0317 18:52:04.457009 2520 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/561bfd32-af00-4e73-92f8-473d7cf3782c-kube-api-access-flgzb podName:561bfd32-af00-4e73-92f8-473d7cf3782c nodeName:}" failed. No retries permitted until 2025-03-17 18:52:04.956988647 +0000 UTC m=+6.479084982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-flgzb" (UniqueName: "kubernetes.io/projected/561bfd32-af00-4e73-92f8-473d7cf3782c-kube-api-access-flgzb") pod "kube-proxy-jmkgh" (UID: "561bfd32-af00-4e73-92f8-473d7cf3782c") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:52:05.049010 kubelet[2520]: I0317 18:52:05.048977 2520 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:52:05.323555 env[1471]: time="2025-03-17T18:52:05.323499779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmkgh,Uid:561bfd32-af00-4e73-92f8-473d7cf3782c,Namespace:kube-system,Attempt:0,}" Mar 17 18:52:05.330924 systemd[1]: Created slice kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice. 
Mar 17 18:52:05.339096 systemd[1]: Created slice kubepods-besteffort-podba857b3f_fcb2_4377_bbe3_c2d24bfad0b4.slice. Mar 17 18:52:05.351087 kubelet[2520]: I0317 18:52:05.351054 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-bpf-maps\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.351291 kubelet[2520]: I0317 18:52:05.351276 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-etc-cni-netd\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.351389 kubelet[2520]: I0317 18:52:05.351372 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjm6z\" (UniqueName: \"kubernetes.io/projected/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-kube-api-access-qjm6z\") pod \"cilium-operator-6c4d7847fc-xdlv4\" (UID: \"ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4\") " pod="kube-system/cilium-operator-6c4d7847fc-xdlv4" Mar 17 18:52:05.351474 kubelet[2520]: I0317 18:52:05.351462 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-xtables-lock\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.351564 kubelet[2520]: I0317 18:52:05.351553 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-net\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " 
pod="kube-system/cilium-jg28l" Mar 17 18:52:05.351659 kubelet[2520]: I0317 18:52:05.351647 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cni-path\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.351746 kubelet[2520]: I0317 18:52:05.351734 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-lib-modules\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.351828 kubelet[2520]: I0317 18:52:05.351817 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-config-path\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.351961 kubelet[2520]: I0317 18:52:05.351939 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/217492c1-c939-4ee7-9e07-a9bb84d6162e-clustermesh-secrets\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.352056 kubelet[2520]: I0317 18:52:05.352044 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-hostproc\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.352135 kubelet[2520]: I0317 18:52:05.352124 2520 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-cgroup\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.352216 kubelet[2520]: I0317 18:52:05.352205 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-kernel\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.352300 kubelet[2520]: I0317 18:52:05.352289 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9rk\" (UniqueName: \"kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-kube-api-access-dr9rk\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.352382 kubelet[2520]: I0317 18:52:05.352371 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-run\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.352468 kubelet[2520]: I0317 18:52:05.352453 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-hubble-tls\") pod \"cilium-jg28l\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " pod="kube-system/cilium-jg28l" Mar 17 18:52:05.352558 kubelet[2520]: I0317 18:52:05.352546 2520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xdlv4\" (UID: \"ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4\") " pod="kube-system/cilium-operator-6c4d7847fc-xdlv4" Mar 17 18:52:05.934143 env[1471]: time="2025-03-17T18:52:05.934099182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg28l,Uid:217492c1-c939-4ee7-9e07-a9bb84d6162e,Namespace:kube-system,Attempt:0,}" Mar 17 18:52:05.942167 env[1471]: time="2025-03-17T18:52:05.942119326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xdlv4,Uid:ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4,Namespace:kube-system,Attempt:0,}" Mar 17 18:52:06.817395 env[1471]: time="2025-03-17T18:52:06.817265183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:06.817395 env[1471]: time="2025-03-17T18:52:06.817313423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:06.817395 env[1471]: time="2025-03-17T18:52:06.817325023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:06.817959 env[1471]: time="2025-03-17T18:52:06.817900621Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/296003a7f03154c345f9897cd624edf033c3ae28b45d2d6d3b852596e969248b pid=2578 runtime=io.containerd.runc.v2 Mar 17 18:52:06.831308 systemd[1]: Started cri-containerd-296003a7f03154c345f9897cd624edf033c3ae28b45d2d6d3b852596e969248b.scope. Mar 17 18:52:06.834981 systemd[1]: run-containerd-runc-k8s.io-296003a7f03154c345f9897cd624edf033c3ae28b45d2d6d3b852596e969248b-runc.c6PotS.mount: Deactivated successfully. 
Mar 17 18:52:06.867242 env[1471]: time="2025-03-17T18:52:06.867189726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmkgh,Uid:561bfd32-af00-4e73-92f8-473d7cf3782c,Namespace:kube-system,Attempt:0,} returns sandbox id \"296003a7f03154c345f9897cd624edf033c3ae28b45d2d6d3b852596e969248b\"" Mar 17 18:52:06.873071 env[1471]: time="2025-03-17T18:52:06.872997794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:06.873212 env[1471]: time="2025-03-17T18:52:06.873075314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:06.873212 env[1471]: time="2025-03-17T18:52:06.873104234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:06.873445 env[1471]: time="2025-03-17T18:52:06.873348554Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e pid=2619 runtime=io.containerd.runc.v2 Mar 17 18:52:06.874339 env[1471]: time="2025-03-17T18:52:06.874296832Z" level=info msg="CreateContainer within sandbox \"296003a7f03154c345f9897cd624edf033c3ae28b45d2d6d3b852596e969248b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:52:06.884697 systemd[1]: Started cri-containerd-9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e.scope. 
Mar 17 18:52:06.909769 env[1471]: time="2025-03-17T18:52:06.909720643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jg28l,Uid:217492c1-c939-4ee7-9e07-a9bb84d6162e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\"" Mar 17 18:52:06.913285 env[1471]: time="2025-03-17T18:52:06.913251556Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:52:06.928939 env[1471]: time="2025-03-17T18:52:06.928856726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:52:06.928939 env[1471]: time="2025-03-17T18:52:06.928903206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:52:06.928939 env[1471]: time="2025-03-17T18:52:06.928914966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:52:06.929331 env[1471]: time="2025-03-17T18:52:06.929297325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd pid=2664 runtime=io.containerd.runc.v2 Mar 17 18:52:06.940846 systemd[1]: Started cri-containerd-a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd.scope. 
Mar 17 18:52:06.975525 env[1471]: time="2025-03-17T18:52:06.975472395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xdlv4,Uid:ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd\"" Mar 17 18:52:07.274966 env[1471]: time="2025-03-17T18:52:07.274196418Z" level=info msg="CreateContainer within sandbox \"296003a7f03154c345f9897cd624edf033c3ae28b45d2d6d3b852596e969248b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ec4ecde96be517923c0c05581b3d6080cbc2711e2fc4688e70d34f7e1b0ae92\"" Mar 17 18:52:07.275322 env[1471]: time="2025-03-17T18:52:07.275292736Z" level=info msg="StartContainer for \"8ec4ecde96be517923c0c05581b3d6080cbc2711e2fc4688e70d34f7e1b0ae92\"" Mar 17 18:52:07.291299 systemd[1]: Started cri-containerd-8ec4ecde96be517923c0c05581b3d6080cbc2711e2fc4688e70d34f7e1b0ae92.scope. Mar 17 18:52:07.323892 env[1471]: time="2025-03-17T18:52:07.323849442Z" level=info msg="StartContainer for \"8ec4ecde96be517923c0c05581b3d6080cbc2711e2fc4688e70d34f7e1b0ae92\" returns successfully" Mar 17 18:52:07.710971 kubelet[2520]: I0317 18:52:07.710914 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jmkgh" podStartSLOduration=4.710897055 podStartE2EDuration="4.710897055s" podCreationTimestamp="2025-03-17 18:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:52:07.710457056 +0000 UTC m=+9.232553391" watchObservedRunningTime="2025-03-17 18:52:07.710897055 +0000 UTC m=+9.232993390" Mar 17 18:52:21.472946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224124706.mount: Deactivated successfully. 
Mar 17 18:52:28.585852 env[1471]: time="2025-03-17T18:52:28.584643312Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:28.594068 env[1471]: time="2025-03-17T18:52:28.594002096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:28.598210 env[1471]: time="2025-03-17T18:52:28.598155289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:28.598858 env[1471]: time="2025-03-17T18:52:28.598756088Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:52:28.600159 env[1471]: time="2025-03-17T18:52:28.599952526Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:52:28.604664 env[1471]: time="2025-03-17T18:52:28.603034121Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:52:28.637190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629936010.mount: Deactivated successfully. Mar 17 18:52:28.641999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount607196071.mount: Deactivated successfully. 
Mar 17 18:52:28.654748 env[1471]: time="2025-03-17T18:52:28.654691473Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\"" Mar 17 18:52:28.657343 env[1471]: time="2025-03-17T18:52:28.657168389Z" level=info msg="StartContainer for \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\"" Mar 17 18:52:28.675517 systemd[1]: Started cri-containerd-3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea.scope. Mar 17 18:52:28.685885 systemd[1]: cri-containerd-3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea.scope: Deactivated successfully. Mar 17 18:52:29.634002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea-rootfs.mount: Deactivated successfully. Mar 17 18:52:33.075089 env[1471]: time="2025-03-17T18:52:30.678772810Z" level=error msg="get state for 3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea" error="context deadline exceeded: unknown" Mar 17 18:52:33.075089 env[1471]: time="2025-03-17T18:52:30.678857850Z" level=warning msg="unknown status" status=0 Mar 17 18:52:33.075089 env[1471]: time="2025-03-17T18:52:32.780140327Z" level=error msg="get state for 3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea" error="context deadline exceeded: unknown" Mar 17 18:52:33.075089 env[1471]: time="2025-03-17T18:52:32.780229567Z" level=warning msg="unknown status" status=0 Mar 17 18:52:33.625446 env[1471]: time="2025-03-17T18:52:33.625372559Z" level=info msg="shim disconnected" id=3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea Mar 17 18:52:33.625618 env[1471]: time="2025-03-17T18:52:33.625452839Z" level=warning msg="cleaning up after shim disconnected" 
id=3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea namespace=k8s.io Mar 17 18:52:33.625618 env[1471]: time="2025-03-17T18:52:33.625463318Z" level=info msg="cleaning up dead shim" Mar 17 18:52:33.632923 env[1471]: time="2025-03-17T18:52:33.632862386Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:52:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2888 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:52:33Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:52:33.633245 env[1471]: time="2025-03-17T18:52:33.633141666Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Mar 17 18:52:33.633935 env[1471]: time="2025-03-17T18:52:33.633898344Z" level=error msg="Failed to pipe stderr of container \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\"" error="reading from a closed fifo" Mar 17 18:52:33.634083 env[1471]: time="2025-03-17T18:52:33.634043944Z" level=error msg="Failed to pipe stdout of container \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\"" error="reading from a closed fifo" Mar 17 18:52:33.674301 env[1471]: time="2025-03-17T18:52:33.674214317Z" level=error msg="StartContainer for \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:52:33.674555 kubelet[2520]: E0317 18:52:33.674512 2520 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: 
runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea" Mar 17 18:52:33.674873 kubelet[2520]: E0317 18:52:33.674694 2520 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:52:33.674873 kubelet[2520]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:52:33.674873 kubelet[2520]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:52:33.674873 kubelet[2520]: rm /hostbin/cilium-mount Mar 17 18:52:33.674977 kubelet[2520]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:52:33.674977 kubelet[2520]: > logger="UnhandledError" Mar 17 18:52:33.676295 kubelet[2520]: E0317 18:52:33.676222 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:52:33.741726 env[1471]: time="2025-03-17T18:52:33.741684245Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Mar 17 18:52:33.871217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375531318.mount: Deactivated successfully. Mar 17 18:52:33.878423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185686752.mount: Deactivated successfully. 
Mar 17 18:52:34.022228 env[1471]: time="2025-03-17T18:52:34.022167738Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\"" Mar 17 18:52:34.022736 env[1471]: time="2025-03-17T18:52:34.022710337Z" level=info msg="StartContainer for \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\"" Mar 17 18:52:34.042090 systemd[1]: Started cri-containerd-fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2.scope. Mar 17 18:52:34.053357 systemd[1]: cri-containerd-fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2.scope: Deactivated successfully. Mar 17 18:52:34.868478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2-rootfs.mount: Deactivated successfully. Mar 17 18:52:35.174166 env[1471]: time="2025-03-17T18:52:35.173663736Z" level=info msg="shim disconnected" id=fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2 Mar 17 18:52:35.174166 env[1471]: time="2025-03-17T18:52:35.173722586Z" level=warning msg="cleaning up after shim disconnected" id=fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2 namespace=k8s.io Mar 17 18:52:35.174166 env[1471]: time="2025-03-17T18:52:35.173731548Z" level=info msg="cleaning up dead shim" Mar 17 18:52:35.181128 env[1471]: time="2025-03-17T18:52:35.181076845Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:52:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2926 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:52:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 
17 18:52:35.181550 env[1471]: time="2025-03-17T18:52:35.181499840Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Mar 17 18:52:35.182152 env[1471]: time="2025-03-17T18:52:35.181775289Z" level=error msg="Failed to pipe stdout of container \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\"" error="reading from a closed fifo" Mar 17 18:52:35.182261 env[1471]: time="2025-03-17T18:52:35.181933436Z" level=error msg="Failed to pipe stderr of container \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\"" error="reading from a closed fifo" Mar 17 18:52:35.266692 env[1471]: time="2025-03-17T18:52:35.266600912Z" level=error msg="StartContainer for \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:52:35.267146 kubelet[2520]: E0317 18:52:35.267084 2520 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2" Mar 17 18:52:35.267705 kubelet[2520]: E0317 18:52:35.267669 2520 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:52:35.267705 kubelet[2520]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:52:35.267705 kubelet[2520]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt 
"${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:52:35.267705 kubelet[2520]: rm /hostbin/cilium-mount Mar 17 18:52:35.267705 kubelet[2520]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 
18:52:35.267705 kubelet[2520]: > logger="UnhandledError" Mar 17 18:52:35.269332 kubelet[2520]: E0317 18:52:35.269152 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:52:35.744470 kubelet[2520]: I0317 18:52:35.744434 2520 scope.go:117] "RemoveContainer" containerID="3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea" Mar 17 18:52:35.744865 kubelet[2520]: I0317 18:52:35.744837 2520 scope.go:117] "RemoveContainer" containerID="3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea" Mar 17 18:52:35.746287 env[1471]: time="2025-03-17T18:52:35.746253520Z" level=info msg="RemoveContainer for \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\"" Mar 17 18:52:35.747211 env[1471]: time="2025-03-17T18:52:35.747174203Z" level=info msg="RemoveContainer for \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\"" Mar 17 18:52:35.747415 env[1471]: time="2025-03-17T18:52:35.747390201Z" level=error msg="RemoveContainer for \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\" failed" error="failed to set removing state for container \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\": container is already in removing state" Mar 17 18:52:35.747736 kubelet[2520]: E0317 18:52:35.747711 2520 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\": container is already in removing state" 
containerID="3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea" Mar 17 18:52:35.747865 kubelet[2520]: E0317 18:52:35.747847 2520 kuberuntime_container.go:897] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\": container is already in removing state; Skipping pod \"cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" logger="UnhandledError" Mar 17 18:52:35.749130 kubelet[2520]: E0317 18:52:35.749104 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:52:35.775131 env[1471]: time="2025-03-17T18:52:35.775076331Z" level=info msg="RemoveContainer for \"3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea\" returns successfully" Mar 17 18:52:36.427889 kubelet[2520]: W0317 18:52:36.427832 2520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice/cri-containerd-3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea.scope WatchSource:0}: container "3b9e6ae06335e39710164e951b54b6d61b4af3b97c95b7c901f318921a2da6ea" in namespace "k8s.io": not found Mar 17 18:52:36.748318 kubelet[2520]: E0317 18:52:36.747934 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 
18:52:37.924346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424370045.mount: Deactivated successfully. Mar 17 18:52:39.539807 kubelet[2520]: W0317 18:52:39.539765 2520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice/cri-containerd-fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2.scope WatchSource:0}: task fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2 not found: not found Mar 17 18:52:39.918097 env[1471]: time="2025-03-17T18:52:39.918032660Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:39.964838 env[1471]: time="2025-03-17T18:52:39.964781707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:40.011362 env[1471]: time="2025-03-17T18:52:40.011322509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:52:40.011836 env[1471]: time="2025-03-17T18:52:40.011807069Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:52:40.015613 env[1471]: time="2025-03-17T18:52:40.015545805Z" level=info msg="CreateContainer within sandbox \"a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:52:40.175314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211728600.mount: Deactivated successfully. Mar 17 18:52:40.183304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount944166315.mount: Deactivated successfully. Mar 17 18:52:40.267316 env[1471]: time="2025-03-17T18:52:40.267253370Z" level=info msg="CreateContainer within sandbox \"a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\"" Mar 17 18:52:40.269101 env[1471]: time="2025-03-17T18:52:40.269058628Z" level=info msg="StartContainer for \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\"" Mar 17 18:52:40.288362 systemd[1]: Started cri-containerd-f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0.scope. Mar 17 18:52:40.324713 env[1471]: time="2025-03-17T18:52:40.324654022Z" level=info msg="StartContainer for \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\" returns successfully" Mar 17 18:52:50.624086 env[1471]: time="2025-03-17T18:52:50.624029055Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Mar 17 18:52:50.648346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212207566.mount: Deactivated successfully. Mar 17 18:52:50.654134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792648490.mount: Deactivated successfully. 
Mar 17 18:52:50.664030 env[1471]: time="2025-03-17T18:52:50.663978021Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\"" Mar 17 18:52:50.664521 env[1471]: time="2025-03-17T18:52:50.664488215Z" level=info msg="StartContainer for \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\"" Mar 17 18:52:50.685504 systemd[1]: Started cri-containerd-b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06.scope. Mar 17 18:52:50.696077 systemd[1]: cri-containerd-b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06.scope: Deactivated successfully. Mar 17 18:52:50.696381 systemd[1]: Stopped cri-containerd-b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06.scope. Mar 17 18:52:51.031125 env[1471]: time="2025-03-17T18:52:51.030536033Z" level=info msg="shim disconnected" id=b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06 Mar 17 18:52:51.031125 env[1471]: time="2025-03-17T18:52:51.030589760Z" level=warning msg="cleaning up after shim disconnected" id=b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06 namespace=k8s.io Mar 17 18:52:51.031125 env[1471]: time="2025-03-17T18:52:51.030598681Z" level=info msg="cleaning up dead shim" Mar 17 18:52:51.038231 env[1471]: time="2025-03-17T18:52:51.038165391Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:52:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3002 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:52:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:52:51.038517 env[1471]: 
time="2025-03-17T18:52:51.038443711Z" level=error msg="copy shim log" error="read /proc/self/fd/95: file already closed" Mar 17 18:52:51.040767 env[1471]: time="2025-03-17T18:52:51.040718192Z" level=error msg="Failed to pipe stdout of container \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\"" error="reading from a closed fifo" Mar 17 18:52:51.040949 env[1471]: time="2025-03-17T18:52:51.040914500Z" level=error msg="Failed to pipe stderr of container \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\"" error="reading from a closed fifo" Mar 17 18:52:51.044901 env[1471]: time="2025-03-17T18:52:51.044837735Z" level=error msg="StartContainer for \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:52:51.045581 kubelet[2520]: E0317 18:52:51.045332 2520 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06" Mar 17 18:52:51.045581 kubelet[2520]: E0317 18:52:51.045515 2520 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:52:51.045581 kubelet[2520]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:52:51.045581 kubelet[2520]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 
18:52:51.045581 kubelet[2520]: rm /hostbin/cilium-mount Mar 17 18:52:51.045581 kubelet[2520]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:52:51.045581 kubelet[2520]: > 
logger="UnhandledError" Mar 17 18:52:51.047118 kubelet[2520]: E0317 18:52:51.047077 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:52:51.645606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06-rootfs.mount: Deactivated successfully. Mar 17 18:52:51.778622 kubelet[2520]: I0317 18:52:51.778594 2520 scope.go:117] "RemoveContainer" containerID="fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2" Mar 17 18:52:51.779116 kubelet[2520]: I0317 18:52:51.779090 2520 scope.go:117] "RemoveContainer" containerID="fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2" Mar 17 18:52:51.780675 env[1471]: time="2025-03-17T18:52:51.780589560Z" level=info msg="RemoveContainer for \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\"" Mar 17 18:52:51.781540 env[1471]: time="2025-03-17T18:52:51.781497249Z" level=info msg="RemoveContainer for \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\"" Mar 17 18:52:51.781708 env[1471]: time="2025-03-17T18:52:51.781602183Z" level=error msg="RemoveContainer for \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\" failed" error="failed to set removing state for container \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\": container is already in removing state" Mar 17 18:52:51.781965 kubelet[2520]: E0317 18:52:51.781938 2520 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container 
\"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\": container is already in removing state" containerID="fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2" Mar 17 18:52:51.782102 kubelet[2520]: E0317 18:52:51.782081 2520 kuberuntime_container.go:897] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\": container is already in removing state; Skipping pod \"cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" logger="UnhandledError" Mar 17 18:52:51.783499 kubelet[2520]: E0317 18:52:51.783471 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:52:51.794355 env[1471]: time="2025-03-17T18:52:51.794309300Z" level=info msg="RemoveContainer for \"fbf5edfa96ff9844bf97a1ff26fdf309e39d805f0b85dac9b60f06cebb4bd4d2\" returns successfully" Mar 17 18:52:51.804649 kubelet[2520]: I0317 18:52:51.804580 2520 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xdlv4" podStartSLOduration=13.768334712 podStartE2EDuration="46.804552788s" podCreationTimestamp="2025-03-17 18:52:05 +0000 UTC" firstStartedPulling="2025-03-17 18:52:06.977036752 +0000 UTC m=+8.499133087" lastFinishedPulling="2025-03-17 18:52:40.013254828 +0000 UTC m=+41.535351163" observedRunningTime="2025-03-17 18:52:40.840861659 +0000 UTC m=+42.362957994" watchObservedRunningTime="2025-03-17 18:52:51.804552788 +0000 UTC m=+53.326649123" Mar 17 18:52:54.136688 kubelet[2520]: W0317 18:52:54.136651 2520 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice/cri-containerd-b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06.scope WatchSource:0}: task b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06 not found: not found Mar 17 18:53:06.620136 kubelet[2520]: E0317 18:53:06.620097 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:53:20.624085 env[1471]: time="2025-03-17T18:53:20.624007731Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Mar 17 18:53:20.662491 env[1471]: time="2025-03-17T18:53:20.662365424Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\"" Mar 17 18:53:20.663099 env[1471]: time="2025-03-17T18:53:20.663065972Z" level=info msg="StartContainer for \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\"" Mar 17 18:53:20.686830 systemd[1]: Started cri-containerd-5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229.scope. Mar 17 18:53:20.699667 systemd[1]: cri-containerd-5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229.scope: Deactivated successfully. 
Mar 17 18:53:20.723448 env[1471]: time="2025-03-17T18:53:20.723396740Z" level=info msg="shim disconnected" id=5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229 Mar 17 18:53:20.723775 env[1471]: time="2025-03-17T18:53:20.723752054Z" level=warning msg="cleaning up after shim disconnected" id=5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229 namespace=k8s.io Mar 17 18:53:20.723846 env[1471]: time="2025-03-17T18:53:20.723833342Z" level=info msg="cleaning up dead shim" Mar 17 18:53:20.733532 env[1471]: time="2025-03-17T18:53:20.733482351Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3044 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:53:20Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:53:20.734018 env[1471]: time="2025-03-17T18:53:20.733960277Z" level=error msg="copy shim log" error="read /proc/self/fd/85: file already closed" Mar 17 18:53:20.734217 env[1471]: time="2025-03-17T18:53:20.734177218Z" level=error msg="Failed to pipe stdout of container \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\"" error="reading from a closed fifo" Mar 17 18:53:20.734288 env[1471]: time="2025-03-17T18:53:20.734261226Z" level=error msg="Failed to pipe stderr of container \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\"" error="reading from a closed fifo" Mar 17 18:53:20.739698 env[1471]: time="2025-03-17T18:53:20.739595940Z" level=error msg="StartContainer for \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:53:20.740368 kubelet[2520]: E0317 18:53:20.739913 2520 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229" Mar 17 18:53:20.740368 kubelet[2520]: E0317 18:53:20.740030 2520 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:53:20.740368 kubelet[2520]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:53:20.740368 kubelet[2520]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:53:20.740368 kubelet[2520]: rm /hostbin/cilium-mount Mar 17 18:53:20.740368 kubelet[2520]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:53:20.740368 kubelet[2520]: > logger="UnhandledError" Mar 17 18:53:20.742882 kubelet[2520]: E0317 18:53:20.742504 2520 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:53:20.829354 kubelet[2520]: I0317 18:53:20.829304 2520 scope.go:117] "RemoveContainer" containerID="b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06" Mar 17 18:53:20.832077 kubelet[2520]: I0317 18:53:20.831718 2520 scope.go:117] "RemoveContainer" containerID="b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06" Mar 17 18:53:20.842052 env[1471]: time="2025-03-17T18:53:20.842011520Z" level=info msg="RemoveContainer for \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\"" Mar 17 18:53:20.843255 env[1471]: time="2025-03-17T18:53:20.843224557Z" level=info msg="RemoveContainer for \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\"" Mar 17 18:53:20.843537 env[1471]: time="2025-03-17T18:53:20.843507304Z" level=error msg="RemoveContainer for \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\" failed" error="failed to set removing state for container \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\": container is already in removing state" Mar 17 18:53:20.843881 kubelet[2520]: E0317 18:53:20.843855 2520 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\": container is already in removing state" containerID="b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06" Mar 17 18:53:20.844033 kubelet[2520]: E0317 18:53:20.844012 2520 kuberuntime_container.go:897] "Unhandled Error" err="failed to 
remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\": container is already in removing state; Skipping pod \"cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" logger="UnhandledError" Mar 17 18:53:20.845397 kubelet[2520]: E0317 18:53:20.845367 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:53:20.860799 env[1471]: time="2025-03-17T18:53:20.860750364Z" level=info msg="RemoveContainer for \"b5dd038ad2448da631a9d5521811b13a5e484f9f8a871b8d9796bb9063377e06\" returns successfully" Mar 17 18:53:21.644821 systemd[1]: run-containerd-runc-k8s.io-5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229-runc.laC8cB.mount: Deactivated successfully. Mar 17 18:53:21.644912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229-rootfs.mount: Deactivated successfully. 
Mar 17 18:53:23.829462 kubelet[2520]: W0317 18:53:23.829427 2520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice/cri-containerd-5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229.scope WatchSource:0}: task 5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229 not found: not found Mar 17 18:53:35.620413 kubelet[2520]: E0317 18:53:35.620378 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:53:47.620619 kubelet[2520]: E0317 18:53:47.620536 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:53:58.668457 kubelet[2520]: E0317 18:53:58.668423 2520 kubelet_node_status.go:461] "Node not becoming ready in time after startup" Mar 17 18:53:58.725169 kubelet[2520]: E0317 18:53:58.725124 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:00.621771 kubelet[2520]: E0317 18:54:00.621490 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" 
podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:54:03.726852 kubelet[2520]: E0317 18:54:03.726740 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:08.728092 kubelet[2520]: E0317 18:54:08.728055 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:12.623316 env[1471]: time="2025-03-17T18:54:12.623171706Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Mar 17 18:54:12.645139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178178005.mount: Deactivated successfully. Mar 17 18:54:12.650198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328994854.mount: Deactivated successfully. Mar 17 18:54:12.664082 env[1471]: time="2025-03-17T18:54:12.663917853Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\"" Mar 17 18:54:12.665534 env[1471]: time="2025-03-17T18:54:12.664617890Z" level=info msg="StartContainer for \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\"" Mar 17 18:54:12.683248 systemd[1]: Started cri-containerd-7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d.scope. Mar 17 18:54:12.694269 systemd[1]: cri-containerd-7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d.scope: Deactivated successfully. 
Mar 17 18:54:12.725331 env[1471]: time="2025-03-17T18:54:12.725277407Z" level=info msg="shim disconnected" id=7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d Mar 17 18:54:12.725331 env[1471]: time="2025-03-17T18:54:12.725332810Z" level=warning msg="cleaning up after shim disconnected" id=7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d namespace=k8s.io Mar 17 18:54:12.725575 env[1471]: time="2025-03-17T18:54:12.725342570Z" level=info msg="cleaning up dead shim" Mar 17 18:54:12.732653 env[1471]: time="2025-03-17T18:54:12.732573911Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3088 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:54:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:54:12.732923 env[1471]: time="2025-03-17T18:54:12.732863967Z" level=error msg="copy shim log" error="read /proc/self/fd/85: file already closed" Mar 17 18:54:12.733698 env[1471]: time="2025-03-17T18:54:12.733657729Z" level=error msg="Failed to pipe stdout of container \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\"" error="reading from a closed fifo" Mar 17 18:54:12.733771 env[1471]: time="2025-03-17T18:54:12.733740333Z" level=error msg="Failed to pipe stderr of container \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\"" error="reading from a closed fifo" Mar 17 18:54:12.737875 env[1471]: time="2025-03-17T18:54:12.737805067Z" level=error msg="StartContainer for \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:54:12.739288 kubelet[2520]: E0317 18:54:12.738064 2520 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d" Mar 17 18:54:12.739288 kubelet[2520]: E0317 18:54:12.738179 2520 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:54:12.739288 kubelet[2520]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:54:12.739288 kubelet[2520]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:54:12.739288 kubelet[2520]: rm /hostbin/cilium-mount Mar 17 18:54:12.739288 kubelet[2520]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:54:12.739288 kubelet[2520]: > logger="UnhandledError" Mar 17 18:54:12.739781 kubelet[2520]: E0317 18:54:12.739739 2520 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:54:12.922318 kubelet[2520]: I0317 18:54:12.922220 2520 scope.go:117] "RemoveContainer" containerID="5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229" Mar 17 18:54:12.923547 kubelet[2520]: I0317 18:54:12.923525 2520 scope.go:117] "RemoveContainer" containerID="5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229" Mar 17 18:54:12.924812 env[1471]: time="2025-03-17T18:54:12.924778201Z" level=info msg="RemoveContainer for \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\"" Mar 17 18:54:12.925206 env[1471]: time="2025-03-17T18:54:12.925006733Z" level=info msg="RemoveContainer for \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\"" Mar 17 18:54:12.925378 env[1471]: time="2025-03-17T18:54:12.925341390Z" level=error msg="RemoveContainer for \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\" failed" error="failed to set removing state for container \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\": container is already in removing state" Mar 17 18:54:12.925554 kubelet[2520]: E0317 18:54:12.925525 2520 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\": container is already in removing state" containerID="5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229" Mar 17 18:54:12.925621 kubelet[2520]: E0317 18:54:12.925577 2520 kuberuntime_container.go:897] "Unhandled Error" err="failed to 
remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\": container is already in removing state; Skipping pod \"cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" logger="UnhandledError" Mar 17 18:54:12.926884 kubelet[2520]: E0317 18:54:12.926853 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:54:12.933420 env[1471]: time="2025-03-17T18:54:12.933380414Z" level=info msg="RemoveContainer for \"5c38c5f0b9b2cdb3da1a10658d3274d9b60e9d2a3db1788144f9ac829f2a3229\" returns successfully" Mar 17 18:54:13.642160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d-rootfs.mount: Deactivated successfully. 
Mar 17 18:54:13.729777 kubelet[2520]: E0317 18:54:13.729732 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:15.830380 kubelet[2520]: W0317 18:54:15.830334 2520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice/cri-containerd-7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d.scope WatchSource:0}: task 7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d not found: not found Mar 17 18:54:18.730391 kubelet[2520]: E0317 18:54:18.730346 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:23.731532 kubelet[2520]: E0317 18:54:23.731486 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:26.621138 kubelet[2520]: E0317 18:54:26.621088 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:54:28.732861 kubelet[2520]: E0317 18:54:28.732823 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:33.733907 kubelet[2520]: E0317 18:54:33.733868 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:37.619940 kubelet[2520]: E0317 18:54:37.619896 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:54:38.735013 kubelet[2520]: E0317 18:54:38.734968 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:43.736305 kubelet[2520]: E0317 18:54:43.736242 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:48.736869 kubelet[2520]: E0317 18:54:48.736828 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:51.620196 kubelet[2520]: E0317 18:54:51.620159 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:54:53.738802 kubelet[2520]: E0317 18:54:53.738760 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:58.740295 kubelet[2520]: E0317 18:54:58.740255 2520 kubelet.go:3008] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:03.741506 kubelet[2520]: E0317 18:55:03.741464 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:06.621923 kubelet[2520]: E0317 18:55:06.621748 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:55:08.743273 kubelet[2520]: E0317 18:55:08.743159 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:13.744473 kubelet[2520]: E0317 18:55:13.744399 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:18.745666 kubelet[2520]: E0317 18:55:18.745538 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:19.620286 kubelet[2520]: E0317 18:55:19.620242 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:55:23.746853 kubelet[2520]: E0317 18:55:23.746758 2520 
kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:28.747757 kubelet[2520]: E0317 18:55:28.747719 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:30.620692 kubelet[2520]: E0317 18:55:30.620650 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:55:33.749286 kubelet[2520]: E0317 18:55:33.749250 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:38.750646 kubelet[2520]: E0317 18:55:38.750595 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:43.752704 kubelet[2520]: E0317 18:55:43.752663 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:45.622284 env[1471]: time="2025-03-17T18:55:45.622056818Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Mar 17 18:55:45.646100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168375593.mount: Deactivated successfully. 
Mar 17 18:55:45.662678 env[1471]: time="2025-03-17T18:55:45.662608844Z" level=info msg="CreateContainer within sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\"" Mar 17 18:55:45.663380 env[1471]: time="2025-03-17T18:55:45.663347824Z" level=info msg="StartContainer for \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\"" Mar 17 18:55:45.683713 systemd[1]: Started cri-containerd-faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e.scope. Mar 17 18:55:45.694069 systemd[1]: cri-containerd-faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e.scope: Deactivated successfully. Mar 17 18:55:45.694345 systemd[1]: Stopped cri-containerd-faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e.scope. Mar 17 18:55:45.715138 env[1471]: time="2025-03-17T18:55:45.715082583Z" level=info msg="shim disconnected" id=faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e Mar 17 18:55:45.715138 env[1471]: time="2025-03-17T18:55:45.715136064Z" level=warning msg="cleaning up after shim disconnected" id=faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e namespace=k8s.io Mar 17 18:55:45.715138 env[1471]: time="2025-03-17T18:55:45.715145985Z" level=info msg="cleaning up dead shim" Mar 17 18:55:45.724570 env[1471]: time="2025-03-17T18:55:45.724513551Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:55:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3142 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:55:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:55:45.724876 env[1471]: 
time="2025-03-17T18:55:45.724811839Z" level=error msg="copy shim log" error="read /proc/self/fd/85: file already closed" Mar 17 18:55:45.725083 env[1471]: time="2025-03-17T18:55:45.725053165Z" level=error msg="Failed to pipe stdout of container \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\"" error="reading from a closed fifo" Mar 17 18:55:45.726726 env[1471]: time="2025-03-17T18:55:45.726688888Z" level=error msg="Failed to pipe stderr of container \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\"" error="reading from a closed fifo" Mar 17 18:55:45.735303 env[1471]: time="2025-03-17T18:55:45.735236153Z" level=error msg="StartContainer for \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:55:45.740447 kubelet[2520]: E0317 18:55:45.739943 2520 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e" Mar 17 18:55:45.740447 kubelet[2520]: E0317 18:55:45.740089 2520 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:55:45.740447 kubelet[2520]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:55:45.740447 kubelet[2520]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 
18:55:45.740447 kubelet[2520]: rm /hostbin/cilium-mount Mar 17 18:55:45.740447 kubelet[2520]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:55:45.740447 kubelet[2520]: > 
logger="UnhandledError" Mar 17 18:55:45.742718 kubelet[2520]: E0317 18:55:45.741952 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:55:46.081305 kubelet[2520]: I0317 18:55:46.081273 2520 scope.go:117] "RemoveContainer" containerID="7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d" Mar 17 18:55:46.082855 kubelet[2520]: I0317 18:55:46.081621 2520 scope.go:117] "RemoveContainer" containerID="7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d" Mar 17 18:55:46.082982 env[1471]: time="2025-03-17T18:55:46.082797077Z" level=info msg="RemoveContainer for \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\"" Mar 17 18:55:46.087800 env[1471]: time="2025-03-17T18:55:46.087765567Z" level=info msg="RemoveContainer for \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\"" Mar 17 18:55:46.088158 env[1471]: time="2025-03-17T18:55:46.088129097Z" level=error msg="RemoveContainer for \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\" failed" error="failed to set removing state for container \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\": container is already in removing state" Mar 17 18:55:46.088433 kubelet[2520]: E0317 18:55:46.088352 2520 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\": container is already in removing state" containerID="7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d" Mar 17 
18:55:46.088433 kubelet[2520]: E0317 18:55:46.088407 2520 kuberuntime_container.go:897] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\": container is already in removing state; Skipping pod \"cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" logger="UnhandledError" Mar 17 18:55:46.090031 kubelet[2520]: E0317 18:55:46.089686 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:55:46.093594 env[1471]: time="2025-03-17T18:55:46.093558199Z" level=info msg="RemoveContainer for \"7359ad0acfbea8c0dc5f6a7b1c3a3565c930e2f1ad331849973c8ec45546844d\" returns successfully" Mar 17 18:55:46.643359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e-rootfs.mount: Deactivated successfully. 
Mar 17 18:55:48.753953 kubelet[2520]: E0317 18:55:48.753909 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:48.820155 kubelet[2520]: W0317 18:55:48.820111 2520 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice/cri-containerd-faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e.scope WatchSource:0}: task faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e not found: not found Mar 17 18:55:53.754920 kubelet[2520]: E0317 18:55:53.754886 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:55:57.620473 kubelet[2520]: E0317 18:55:57.620435 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:55:58.756777 kubelet[2520]: E0317 18:55:58.756738 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:03.757718 kubelet[2520]: E0317 18:56:03.757681 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:08.620945 kubelet[2520]: E0317 18:56:08.620904 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:56:08.758976 kubelet[2520]: E0317 18:56:08.758942 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:13.760567 kubelet[2520]: E0317 18:56:13.760509 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:18.761196 kubelet[2520]: E0317 18:56:18.761149 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:20.619967 kubelet[2520]: E0317 18:56:20.619927 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:56:23.761850 kubelet[2520]: E0317 18:56:23.761820 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:28.763245 kubelet[2520]: E0317 18:56:28.763206 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:32.626790 kubelet[2520]: E0317 18:56:32.626752 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:56:33.763929 kubelet[2520]: E0317 18:56:33.763894 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:38.765467 kubelet[2520]: E0317 18:56:38.765385 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:43.620107 kubelet[2520]: E0317 18:56:43.620064 2520 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:56:43.766900 kubelet[2520]: E0317 18:56:43.766869 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:48.768005 kubelet[2520]: E0317 18:56:48.767969 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:53.769768 kubelet[2520]: E0317 18:56:53.769713 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:56.620917 kubelet[2520]: E0317 18:56:56.620880 2520 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-jg28l_kube-system(217492c1-c939-4ee7-9e07-a9bb84d6162e)\"" pod="kube-system/cilium-jg28l" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" Mar 17 18:56:58.771168 kubelet[2520]: E0317 18:56:58.771134 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:57:03.772593 kubelet[2520]: E0317 18:57:03.772544 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:57:06.732058 env[1471]: time="2025-03-17T18:57:06.731850673Z" level=info msg="StopPodSandbox for \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\"" Mar 17 18:57:06.732058 env[1471]: time="2025-03-17T18:57:06.731931555Z" level=info msg="Container to stop \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:57:06.734013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e-shm.mount: Deactivated successfully. Mar 17 18:57:06.743021 env[1471]: time="2025-03-17T18:57:06.742983302Z" level=info msg="StopContainer for \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\" with timeout 30 (s)" Mar 17 18:57:06.743605 env[1471]: time="2025-03-17T18:57:06.743569514Z" level=info msg="Stop container \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\" with signal terminated" Mar 17 18:57:06.746034 systemd[1]: cri-containerd-9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e.scope: Deactivated successfully. 
Mar 17 18:57:06.774809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e-rootfs.mount: Deactivated successfully. Mar 17 18:57:06.791172 sudo[1872]: pam_unix(sudo:session): session closed for user root Mar 17 18:57:06.796251 env[1471]: time="2025-03-17T18:57:06.796198836Z" level=info msg="shim disconnected" id=9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e Mar 17 18:57:06.796992 env[1471]: time="2025-03-17T18:57:06.796962211Z" level=warning msg="cleaning up after shim disconnected" id=9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e namespace=k8s.io Mar 17 18:57:06.797137 env[1471]: time="2025-03-17T18:57:06.797123455Z" level=info msg="cleaning up dead shim" Mar 17 18:57:06.806466 env[1471]: time="2025-03-17T18:57:06.806413605Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:57:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3187 runtime=io.containerd.runc.v2\n" Mar 17 18:57:06.806984 env[1471]: time="2025-03-17T18:57:06.806951577Z" level=info msg="TearDown network for sandbox \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" successfully" Mar 17 18:57:06.807090 env[1471]: time="2025-03-17T18:57:06.807073019Z" level=info msg="StopPodSandbox for \"9d1e94f10aaa8b2e7ed4f6e27a46251fcde9a214bf3a1346a433cf62c215186e\" returns successfully" Mar 17 18:57:06.808090 systemd[1]: cri-containerd-f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0.scope: Deactivated successfully. Mar 17 18:57:06.839676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0-rootfs.mount: Deactivated successfully. 
Mar 17 18:57:06.866357 env[1471]: time="2025-03-17T18:57:06.866301356Z" level=info msg="shim disconnected" id=f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0 Mar 17 18:57:06.866599 env[1471]: time="2025-03-17T18:57:06.866359998Z" level=warning msg="cleaning up after shim disconnected" id=f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0 namespace=k8s.io Mar 17 18:57:06.866599 env[1471]: time="2025-03-17T18:57:06.866372718Z" level=info msg="cleaning up dead shim" Mar 17 18:57:06.874603 env[1471]: time="2025-03-17T18:57:06.874548326Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:57:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3214 runtime=io.containerd.runc.v2\n" Mar 17 18:57:06.879121 env[1471]: time="2025-03-17T18:57:06.879052618Z" level=info msg="StopContainer for \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\" returns successfully" Mar 17 18:57:06.881034 env[1471]: time="2025-03-17T18:57:06.880110440Z" level=info msg="StopPodSandbox for \"a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd\"" Mar 17 18:57:06.881034 env[1471]: time="2025-03-17T18:57:06.880176562Z" level=info msg="Container to stop \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:57:06.879834 sshd[1869]: pam_unix(sshd:session): session closed for user core Mar 17 18:57:06.881882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd-shm.mount: Deactivated successfully. Mar 17 18:57:06.886322 systemd[1]: sshd@4-10.200.20.41:22-10.200.16.10:55474.service: Deactivated successfully. Mar 17 18:57:06.887066 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:57:06.887231 systemd[1]: session-7.scope: Consumed 8.691s CPU time. Mar 17 18:57:06.888130 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. 
Mar 17 18:57:06.888963 systemd-logind[1461]: Removed session 7. Mar 17 18:57:06.898168 systemd[1]: cri-containerd-a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd.scope: Deactivated successfully. Mar 17 18:57:06.917521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd-rootfs.mount: Deactivated successfully. Mar 17 18:57:06.936105 env[1471]: time="2025-03-17T18:57:06.936058710Z" level=info msg="shim disconnected" id=a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936463 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-bpf-maps\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936500 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cni-path\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936525 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-config-path\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936542 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-run\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: 
I0317 18:57:06.936563 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-kernel\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936569 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936606 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-hostproc" (OuterVolumeSpecName: "hostproc") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936583 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-hostproc\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936681 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-cgroup\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936705 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-xtables-lock\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936736 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/217492c1-c939-4ee7-9e07-a9bb84d6162e-clustermesh-secrets\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936756 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr9rk\" (UniqueName: \"kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-kube-api-access-dr9rk\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936776 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-etc-cni-netd\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936790 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-net\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936816 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-hubble-tls\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.936972 kubelet[2520]: I0317 18:57:06.936833 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-lib-modules\") pod \"217492c1-c939-4ee7-9e07-a9bb84d6162e\" (UID: \"217492c1-c939-4ee7-9e07-a9bb84d6162e\") " Mar 17 18:57:06.937644 kubelet[2520]: I0317 18:57:06.936901 2520 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-hostproc\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:06.937644 kubelet[2520]: I0317 18:57:06.936912 2520 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-bpf-maps\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:06.937644 kubelet[2520]: I0317 18:57:06.936624 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cni-path" (OuterVolumeSpecName: 
"cni-path") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.937644 kubelet[2520]: I0317 18:57:06.936935 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.937644 kubelet[2520]: I0317 18:57:06.936968 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.937644 kubelet[2520]: I0317 18:57:06.936982 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.938310 kubelet[2520]: I0317 18:57:06.937898 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.938310 kubelet[2520]: I0317 18:57:06.937952 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.938310 kubelet[2520]: I0317 18:57:06.937972 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.938310 kubelet[2520]: I0317 18:57:06.937987 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:57:06.938713 env[1471]: time="2025-03-17T18:57:06.938679124Z" level=warning msg="cleaning up after shim disconnected" id=a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd namespace=k8s.io Mar 17 18:57:06.938819 env[1471]: time="2025-03-17T18:57:06.938805247Z" level=info msg="cleaning up dead shim" Mar 17 18:57:06.941833 kubelet[2520]: I0317 18:57:06.941790 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:57:06.944744 kubelet[2520]: I0317 18:57:06.944697 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-kube-api-access-dr9rk" (OuterVolumeSpecName: "kube-api-access-dr9rk") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "kube-api-access-dr9rk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:57:06.945125 kubelet[2520]: I0317 18:57:06.945091 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217492c1-c939-4ee7-9e07-a9bb84d6162e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:57:06.946678 kubelet[2520]: I0317 18:57:06.946646 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "217492c1-c939-4ee7-9e07-a9bb84d6162e" (UID: "217492c1-c939-4ee7-9e07-a9bb84d6162e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:57:06.950366 env[1471]: time="2025-03-17T18:57:06.950311643Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:57:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3251 runtime=io.containerd.runc.v2\n" Mar 17 18:57:06.950894 env[1471]: time="2025-03-17T18:57:06.950861974Z" level=info msg="TearDown network for sandbox \"a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd\" successfully" Mar 17 18:57:06.951010 env[1471]: time="2025-03-17T18:57:06.950991777Z" level=info msg="StopPodSandbox for \"a1bc2fa55c98ad5e84c310a3354e2fd847fbdfed97293dc402aade3428bfd3bd\" returns successfully" Mar 17 18:57:07.040030 kubelet[2520]: I0317 18:57:07.037975 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjm6z\" (UniqueName: \"kubernetes.io/projected/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-kube-api-access-qjm6z\") pod \"ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4\" (UID: \"ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4\") " Mar 17 18:57:07.040276 kubelet[2520]: I0317 18:57:07.040252 2520 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-cilium-config-path\") pod \"ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4\" (UID: \"ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4\") " Mar 17 18:57:07.040417 kubelet[2520]: I0317 18:57:07.040402 2520 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cni-path\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040501 kubelet[2520]: I0317 18:57:07.040485 2520 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-config-path\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040585 kubelet[2520]: I0317 18:57:07.040574 2520 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-run\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040669 kubelet[2520]: I0317 18:57:07.040656 2520 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-kernel\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040738 kubelet[2520]: I0317 18:57:07.040727 2520 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-cilium-cgroup\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040797 kubelet[2520]: I0317 18:57:07.040787 2520 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-xtables-lock\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040871 kubelet[2520]: I0317 18:57:07.040830 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-kube-api-access-qjm6z" (OuterVolumeSpecName: "kube-api-access-qjm6z") pod "ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4" (UID: "ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4"). InnerVolumeSpecName "kube-api-access-qjm6z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:57:07.040871 kubelet[2520]: I0317 18:57:07.040846 2520 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/217492c1-c939-4ee7-9e07-a9bb84d6162e-clustermesh-secrets\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040933 kubelet[2520]: I0317 18:57:07.040890 2520 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dr9rk\" (UniqueName: \"kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-kube-api-access-dr9rk\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040933 kubelet[2520]: I0317 18:57:07.040904 2520 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/217492c1-c939-4ee7-9e07-a9bb84d6162e-hubble-tls\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040933 kubelet[2520]: I0317 18:57:07.040915 2520 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-etc-cni-netd\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040933 kubelet[2520]: I0317 18:57:07.040924 2520 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-host-proc-sys-net\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.040933 kubelet[2520]: I0317 18:57:07.040932 2520 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/217492c1-c939-4ee7-9e07-a9bb84d6162e-lib-modules\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.042877 kubelet[2520]: I0317 18:57:07.042845 2520 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-cilium-config-path" 
(OuterVolumeSpecName: "cilium-config-path") pod "ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4" (UID: "ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:57:07.141313 kubelet[2520]: I0317 18:57:07.141266 2520 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-cilium-config-path\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.141313 kubelet[2520]: I0317 18:57:07.141316 2520 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qjm6z\" (UniqueName: \"kubernetes.io/projected/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4-kube-api-access-qjm6z\") on node \"ci-3510.3.7-a-2552a29e1b\" DevicePath \"\"" Mar 17 18:57:07.213766 kubelet[2520]: I0317 18:57:07.213730 2520 scope.go:117] "RemoveContainer" containerID="f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0" Mar 17 18:57:07.217309 systemd[1]: Removed slice kubepods-besteffort-podba857b3f_fcb2_4377_bbe3_c2d24bfad0b4.slice. Mar 17 18:57:07.219263 env[1471]: time="2025-03-17T18:57:07.218504428Z" level=info msg="RemoveContainer for \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\"" Mar 17 18:57:07.224248 systemd[1]: Removed slice kubepods-burstable-pod217492c1_c939_4ee7_9e07_a9bb84d6162e.slice. 
Mar 17 18:57:07.229819 env[1471]: time="2025-03-17T18:57:07.229767859Z" level=info msg="RemoveContainer for \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\" returns successfully" Mar 17 18:57:07.230130 kubelet[2520]: I0317 18:57:07.230095 2520 scope.go:117] "RemoveContainer" containerID="f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0" Mar 17 18:57:07.230523 env[1471]: time="2025-03-17T18:57:07.230399232Z" level=error msg="ContainerStatus for \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\": not found" Mar 17 18:57:07.230776 kubelet[2520]: E0317 18:57:07.230583 2520 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\": not found" containerID="f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0" Mar 17 18:57:07.230776 kubelet[2520]: I0317 18:57:07.230614 2520 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0"} err="failed to get container status \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7179d75b79fab783ad1fb16d250a6099bfc389f4ddf16a457b0bf316c6d98c0\": not found" Mar 17 18:57:07.230776 kubelet[2520]: I0317 18:57:07.230678 2520 scope.go:117] "RemoveContainer" containerID="faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e" Mar 17 18:57:07.231883 env[1471]: time="2025-03-17T18:57:07.231849862Z" level=info msg="RemoveContainer for \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\"" Mar 17 18:57:07.244521 env[1471]: 
time="2025-03-17T18:57:07.244475481Z" level=info msg="RemoveContainer for \"faf61b4a1675dd0efcf302d21256380217c7953f1c5dfbc3e8c144dd49ed1f4e\" returns successfully" Mar 17 18:57:07.733904 systemd[1]: var-lib-kubelet-pods-217492c1\x2dc939\x2d4ee7\x2d9e07\x2da9bb84d6162e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:57:07.733994 systemd[1]: var-lib-kubelet-pods-217492c1\x2dc939\x2d4ee7\x2d9e07\x2da9bb84d6162e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddr9rk.mount: Deactivated successfully. Mar 17 18:57:07.734045 systemd[1]: var-lib-kubelet-pods-217492c1\x2dc939\x2d4ee7\x2d9e07\x2da9bb84d6162e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:57:07.734092 systemd[1]: var-lib-kubelet-pods-ba857b3f\x2dfcb2\x2d4377\x2dbbe3\x2dc2d24bfad0b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqjm6z.mount: Deactivated successfully. Mar 17 18:57:08.621752 kubelet[2520]: I0317 18:57:08.621719 2520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217492c1-c939-4ee7-9e07-a9bb84d6162e" path="/var/lib/kubelet/pods/217492c1-c939-4ee7-9e07-a9bb84d6162e/volumes" Mar 17 18:57:08.622487 kubelet[2520]: I0317 18:57:08.622470 2520 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4" path="/var/lib/kubelet/pods/ba857b3f-fcb2-4377-bbe3-c2d24bfad0b4/volumes" Mar 17 18:57:08.773891 kubelet[2520]: E0317 18:57:08.773845 2520 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"