Mar 17 18:48:17.010270 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 18:48:17.010288 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:48:17.010295 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 17 18:48:17.010302 kernel: printk: bootconsole [pl11] enabled
Mar 17 18:48:17.010307 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:48:17.010313 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98
Mar 17 18:48:17.010319 kernel: random: crng init done
Mar 17 18:48:17.010325 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:48:17.010330 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 17 18:48:17.010336 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010341 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010347 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 17 18:48:17.010353 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010359 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010366 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010372 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010378 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010385 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010391 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 17 18:48:17.010397 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:17.010402 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 17 18:48:17.010408 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:48:17.010414 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 18:48:17.010420 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Mar 17 18:48:17.010425 kernel: Zone ranges:
Mar 17 18:48:17.010431 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 17 18:48:17.010437 kernel: DMA32 empty
Mar 17 18:48:17.010443 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 18:48:17.013526 kernel: Movable zone start for each node
Mar 17 18:48:17.013539 kernel: Early memory node ranges
Mar 17 18:48:17.013545 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 17 18:48:17.013551 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 17 18:48:17.013557 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 17 18:48:17.013563 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 17 18:48:17.013569 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 17 18:48:17.013575 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 17 18:48:17.013581 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 18:48:17.013587 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 18:48:17.013593 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 17 18:48:17.013599 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:48:17.013611 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 18:48:17.013617 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:48:17.013623 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 17 18:48:17.013629 kernel: psci: SMC Calling Convention v1.4
Mar 17 18:48:17.013635 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Mar 17 18:48:17.013643 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Mar 17 18:48:17.013649 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:48:17.013655 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:48:17.013661 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 18:48:17.013667 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:48:17.013674 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:48:17.013680 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 18:48:17.013686 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:48:17.013692 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:48:17.013698 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:48:17.013705 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 18:48:17.013712 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 17 18:48:17.013718 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 18:48:17.013725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 17 18:48:17.013731 kernel: Policy zone: Normal
Mar 17 18:48:17.013738 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:48:17.013745 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:48:17.013752 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:48:17.013758 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:48:17.013764 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:48:17.013770 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Mar 17 18:48:17.013777 kernel: Memory: 3986936K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207224K reserved, 0K cma-reserved)
Mar 17 18:48:17.013785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:48:17.013791 kernel: trace event string verifier disabled
Mar 17 18:48:17.013797 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:48:17.013803 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:48:17.013809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:48:17.013816 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:48:17.013822 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:48:17.013828 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:48:17.013834 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:48:17.013840 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:48:17.013846 kernel: GICv3: 960 SPIs implemented
Mar 17 18:48:17.013853 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:48:17.013859 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:48:17.013865 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:48:17.013871 kernel: GICv3: 16 PPIs implemented
Mar 17 18:48:17.013877 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 17 18:48:17.013884 kernel: ITS: No ITS available, not enabling LPIs
Mar 17 18:48:17.013890 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:48:17.013896 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 18:48:17.013902 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 18:48:17.013909 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 18:48:17.013915 kernel: Console: colour dummy device 80x25
Mar 17 18:48:17.013923 kernel: printk: console [tty1] enabled
Mar 17 18:48:17.013929 kernel: ACPI: Core revision 20210730
Mar 17 18:48:17.013936 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 18:48:17.013942 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:48:17.013949 kernel: LSM: Security Framework initializing
Mar 17 18:48:17.013955 kernel: SELinux: Initializing.
Mar 17 18:48:17.013961 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:48:17.013968 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:48:17.013974 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 17 18:48:17.013982 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 17 18:48:17.013988 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:48:17.013994 kernel: Remapping and enabling EFI services.
Mar 17 18:48:17.014001 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:48:17.014007 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:48:17.014013 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 17 18:48:17.014020 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:48:17.014026 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 18:48:17.014032 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:48:17.014039 kernel: SMP: Total of 2 processors activated.
Mar 17 18:48:17.014046 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:48:17.014053 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 17 18:48:17.014060 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 18:48:17.014066 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:48:17.014072 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 18:48:17.014079 kernel: CPU features: detected: LSE atomic instructions
Mar 17 18:48:17.014085 kernel: CPU features: detected: Privileged Access Never
Mar 17 18:48:17.014091 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:48:17.014098 kernel: alternatives: patching kernel code
Mar 17 18:48:17.014106 kernel: devtmpfs: initialized
Mar 17 18:48:17.014117 kernel: KASLR enabled
Mar 17 18:48:17.014124 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:48:17.014132 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:48:17.014138 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:48:17.014145 kernel: SMBIOS 3.1.0 present.
Mar 17 18:48:17.014152 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 17 18:48:17.014159 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:48:17.014165 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:48:17.014174 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:48:17.014181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:48:17.014188 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:48:17.014194 kernel: audit: type=2000 audit(0.087:1): state=initialized audit_enabled=0 res=1
Mar 17 18:48:17.014201 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:48:17.014208 kernel: cpuidle: using governor menu
Mar 17 18:48:17.014214 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:48:17.014222 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:48:17.014229 kernel: ACPI: bus type PCI registered
Mar 17 18:48:17.014236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:48:17.014242 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:48:17.014249 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:48:17.014256 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:48:17.014262 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:48:17.014269 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:48:17.014276 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:48:17.014284 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:48:17.014290 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:48:17.014297 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:48:17.014304 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:48:17.014310 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:48:17.014317 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:48:17.014324 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:48:17.014330 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:48:17.014337 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:48:17.014345 kernel: ACPI: Interpreter enabled
Mar 17 18:48:17.014352 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:48:17.014359 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 18:48:17.014365 kernel: printk: console [ttyAMA0] enabled
Mar 17 18:48:17.014372 kernel: printk: bootconsole [pl11] disabled
Mar 17 18:48:17.014378 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 17 18:48:17.014385 kernel: iommu: Default domain type: Translated
Mar 17 18:48:17.014392 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:48:17.014398 kernel: vgaarb: loaded
Mar 17 18:48:17.014405 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:48:17.014413 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:48:17.014420 kernel: PTP clock support registered
Mar 17 18:48:17.014426 kernel: Registered efivars operations
Mar 17 18:48:17.014433 kernel: No ACPI PMU IRQ for CPU0
Mar 17 18:48:17.014440 kernel: No ACPI PMU IRQ for CPU1
Mar 17 18:48:17.014457 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:48:17.014465 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:48:17.014472 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:48:17.014480 kernel: pnp: PnP ACPI init
Mar 17 18:48:17.014486 kernel: pnp: PnP ACPI: found 0 devices
Mar 17 18:48:17.014493 kernel: NET: Registered PF_INET protocol family
Mar 17 18:48:17.014500 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:48:17.014506 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:48:17.014513 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:48:17.014520 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:48:17.014527 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:48:17.014534 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:48:17.014543 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:48:17.014549 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:48:17.014556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:48:17.014563 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:48:17.014570 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 17 18:48:17.014576 kernel: kvm [1]: HYP mode not available
Mar 17 18:48:17.014583 kernel: Initialise system trusted keyrings
Mar 17 18:48:17.014589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:48:17.014596 kernel: Key type asymmetric registered
Mar 17 18:48:17.014603 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:48:17.014610 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:48:17.014617 kernel: io scheduler mq-deadline registered
Mar 17 18:48:17.014624 kernel: io scheduler kyber registered
Mar 17 18:48:17.014630 kernel: io scheduler bfq registered
Mar 17 18:48:17.014637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:48:17.014644 kernel: thunder_xcv, ver 1.0
Mar 17 18:48:17.014650 kernel: thunder_bgx, ver 1.0
Mar 17 18:48:17.014657 kernel: nicpf, ver 1.0
Mar 17 18:48:17.014663 kernel: nicvf, ver 1.0
Mar 17 18:48:17.014795 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:48:17.014856 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:48:16 UTC (1742237296)
Mar 17 18:48:17.014866 kernel: efifb: probing for efifb
Mar 17 18:48:17.014873 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 18:48:17.014880 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 18:48:17.014886 kernel: efifb: scrolling: redraw
Mar 17 18:48:17.014893 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 18:48:17.014902 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 18:48:17.014909 kernel: fb0: EFI VGA frame buffer device
Mar 17 18:48:17.014915 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 17 18:48:17.014922 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:48:17.014928 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:48:17.014935 kernel: Segment Routing with IPv6
Mar 17 18:48:17.014941 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:48:17.014948 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:48:17.014955 kernel: Key type dns_resolver registered
Mar 17 18:48:17.014961 kernel: registered taskstats version 1
Mar 17 18:48:17.014969 kernel: Loading compiled-in X.509 certificates
Mar 17 18:48:17.014976 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:48:17.014983 kernel: Key type .fscrypt registered
Mar 17 18:48:17.014990 kernel: Key type fscrypt-provisioning registered
Mar 17 18:48:17.014996 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:48:17.015003 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:48:17.015009 kernel: ima: No architecture policies found
Mar 17 18:48:17.015016 kernel: clk: Disabling unused clocks
Mar 17 18:48:17.015024 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:48:17.015031 kernel: Run /init as init process
Mar 17 18:48:17.015037 kernel: with arguments:
Mar 17 18:48:17.015044 kernel: /init
Mar 17 18:48:17.015050 kernel: with environment:
Mar 17 18:48:17.015057 kernel: HOME=/
Mar 17 18:48:17.015063 kernel: TERM=linux
Mar 17 18:48:17.015069 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:48:17.015078 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:48:17.015089 systemd[1]: Detected virtualization microsoft.
Mar 17 18:48:17.015096 systemd[1]: Detected architecture arm64.
Mar 17 18:48:17.015103 systemd[1]: Running in initrd.
Mar 17 18:48:17.015110 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:48:17.015117 systemd[1]: Hostname set to .
Mar 17 18:48:17.015124 systemd[1]: Initializing machine ID from random generator.
Mar 17 18:48:17.015132 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:48:17.015140 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:48:17.015147 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:48:17.015154 systemd[1]: Reached target paths.target.
Mar 17 18:48:17.015161 systemd[1]: Reached target slices.target.
Mar 17 18:48:17.015168 systemd[1]: Reached target swap.target.
Mar 17 18:48:17.015175 systemd[1]: Reached target timers.target.
Mar 17 18:48:17.015182 systemd[1]: Listening on iscsid.socket.
Mar 17 18:48:17.015190 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:48:17.015198 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:48:17.015206 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:48:17.015213 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:48:17.015220 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:48:17.015227 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:48:17.015234 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:48:17.015241 systemd[1]: Reached target sockets.target.
Mar 17 18:48:17.015248 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:48:17.015255 systemd[1]: Finished network-cleanup.service.
Mar 17 18:48:17.015264 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:48:17.015271 systemd[1]: Starting systemd-journald.service...
Mar 17 18:48:17.015278 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:48:17.015285 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:48:17.015296 systemd-journald[276]: Journal started
Mar 17 18:48:17.015336 systemd-journald[276]: Runtime Journal (/run/log/journal/313af7fa542b4e60bf380f9b2bec3fcd) is 8.0M, max 78.5M, 70.5M free.
Mar 17 18:48:17.005995 systemd-modules-load[277]: Inserted module 'overlay'
Mar 17 18:48:17.047742 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:48:17.047765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:48:17.039934 systemd-resolved[278]: Positive Trust Anchors:
Mar 17 18:48:17.039942 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:48:17.091095 kernel: Bridge firewalling registered
Mar 17 18:48:17.091117 systemd[1]: Started systemd-journald.service.
Mar 17 18:48:17.091137 kernel: SCSI subsystem initialized
Mar 17 18:48:17.091145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:48:17.091155 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:48:17.039969 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:48:17.130652 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:48:17.042062 systemd-resolved[278]: Defaulting to hostname 'linux'.
Mar 17 18:48:17.055534 systemd-modules-load[277]: Inserted module 'br_netfilter'
Mar 17 18:48:17.179776 kernel: audit: type=1130 audit(1742237297.141:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.179804 kernel: audit: type=1130 audit(1742237297.163:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.142215 systemd-modules-load[277]: Inserted module 'dm_multipath'
Mar 17 18:48:17.206175 kernel: audit: type=1130 audit(1742237297.185:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.142549 systemd[1]: Started systemd-resolved.service.
Mar 17 18:48:17.163848 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:48:17.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.186135 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:48:17.243192 kernel: audit: type=1130 audit(1742237297.210:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.210907 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:48:17.239008 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:48:17.276287 kernel: audit: type=1130 audit(1742237297.218:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.248073 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:48:17.301172 kernel: audit: type=1130 audit(1742237297.247:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.271269 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:48:17.297964 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:48:17.306610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:48:17.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.320883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:48:17.354628 kernel: audit: type=1130 audit(1742237297.328:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.349794 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:48:17.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.379627 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:48:17.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.387831 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:48:17.414767 kernel: audit: type=1130 audit(1742237297.358:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.414798 kernel: audit: type=1130 audit(1742237297.383:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.416474 dracut-cmdline[299]: dracut-dracut-053
Mar 17 18:48:17.421005 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:48:17.511476 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:48:17.526473 kernel: iscsi: registered transport (tcp)
Mar 17 18:48:17.546711 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:48:17.546731 kernel: QLogic iSCSI HBA Driver
Mar 17 18:48:17.581581 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:48:17.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:17.587240 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:48:17.637478 kernel: raid6: neonx8 gen() 13791 MB/s
Mar 17 18:48:17.658462 kernel: raid6: neonx8 xor() 10836 MB/s
Mar 17 18:48:17.678463 kernel: raid6: neonx4 gen() 13536 MB/s
Mar 17 18:48:17.699470 kernel: raid6: neonx4 xor() 11312 MB/s
Mar 17 18:48:17.719468 kernel: raid6: neonx2 gen() 13088 MB/s
Mar 17 18:48:17.739467 kernel: raid6: neonx2 xor() 10494 MB/s
Mar 17 18:48:17.760466 kernel: raid6: neonx1 gen() 10570 MB/s
Mar 17 18:48:17.781485 kernel: raid6: neonx1 xor() 8750 MB/s
Mar 17 18:48:17.801460 kernel: raid6: int64x8 gen() 6275 MB/s
Mar 17 18:48:17.822459 kernel: raid6: int64x8 xor() 3544 MB/s
Mar 17 18:48:17.842457 kernel: raid6: int64x4 gen() 7211 MB/s
Mar 17 18:48:17.862463 kernel: raid6: int64x4 xor() 3859 MB/s
Mar 17 18:48:17.883458 kernel: raid6: int64x2 gen() 6152 MB/s
Mar 17 18:48:17.903461 kernel: raid6: int64x2 xor() 3324 MB/s
Mar 17 18:48:17.923456 kernel: raid6: int64x1 gen() 5047 MB/s
Mar 17 18:48:17.949675 kernel: raid6: int64x1 xor() 2643 MB/s
Mar 17 18:48:17.949701 kernel: raid6: using algorithm neonx8 gen() 13791 MB/s
Mar 17 18:48:17.949717 kernel: raid6: .... xor() 10836 MB/s, rmw enabled
Mar 17 18:48:17.953727 kernel: raid6: using neon recovery algorithm
Mar 17 18:48:17.973995 kernel: xor: measuring software checksum speed
Mar 17 18:48:17.974007 kernel: 8regs : 17191 MB/sec
Mar 17 18:48:17.977696 kernel: 32regs : 20712 MB/sec
Mar 17 18:48:17.981401 kernel: arm64_neon : 27691 MB/sec
Mar 17 18:48:17.981411 kernel: xor: using function: arm64_neon (27691 MB/sec)
Mar 17 18:48:18.041463 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:48:18.050689 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:48:18.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.058000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:48:18.058000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:48:18.059122 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:48:18.076869 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Mar 17 18:48:18.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.083517 systemd[1]: Started systemd-udevd.service.
Mar 17 18:48:18.093067 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:48:18.106538 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Mar 17 18:48:18.136069 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:48:18.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.141429 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:48:18.179632 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:48:18.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:18.234502 kernel: hv_vmbus: Vmbus version:5.3
Mar 17 18:48:18.247469 kernel: hv_vmbus: registering driver hid_hyperv
Mar 17 18:48:18.247515 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 17 18:48:18.269195 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 17 18:48:18.269247 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 17 18:48:18.277472 kernel: hv_vmbus: registering driver hv_netvsc
Mar 17 18:48:18.277513 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 17 18:48:18.290474 kernel: hv_vmbus: registering driver hv_storvsc
Mar 17 18:48:18.297915 kernel: scsi host0: storvsc_host_t
Mar 17 18:48:18.297994 kernel: scsi host1: storvsc_host_t
Mar 17 18:48:18.304807 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 17 18:48:18.311620 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 17 18:48:18.331413 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 17 18:48:18.347003 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:48:18.347018 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 17 18:48:18.363659 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 17 18:48:18.363767 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 17 18:48:18.363845 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 17 18:48:18.363922 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 17 18:48:18.364004 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 17 18:48:18.364087 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:48:18.364097 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 17 18:48:18.426048 kernel: hv_netvsc 000d3a07-3780-000d-3a07-3780000d3a07 eth0: VF slot 1 added
Mar 17 18:48:18.435474 kernel: hv_vmbus: registering driver hv_pci
Mar 17 18:48:18.444467 kernel: hv_pci 941fe16f-44e9-4278-9c95-a42b4ca0c60d: PCI VMBus probing: Using version 0x10004
Mar 17 18:48:18.734578 kernel: hv_pci 941fe16f-44e9-4278-9c95-a42b4ca0c60d: PCI host bridge to bus 44e9:00
Mar 17 18:48:18.734692 kernel: pci_bus 44e9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 17 18:48:18.734791 kernel: pci_bus 44e9:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 17 18:48:18.734863 kernel: pci 44e9:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 17 18:48:18.734953 kernel: pci 44e9:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 18:48:18.735030 kernel: pci 44e9:00:02.0: enabling Extended Tags
Mar 17 18:48:18.735106 kernel: pci 44e9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 44e9:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 17 18:48:18.735186 kernel: pci_bus 44e9:00: busn_res: [bus 00-ff] end is updated to 00
Mar 17 18:48:18.735259 kernel: pci 44e9:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 18:48:18.772477 kernel: mlx5_core 44e9:00:02.0: firmware version: 16.31.2424
Mar 17 18:48:19.083335 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (538)
Mar 17 18:48:19.083353 kernel: mlx5_core 44e9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Mar 17 18:48:19.083488 kernel: hv_netvsc 000d3a07-3780-000d-3a07-3780000d3a07 eth0: VF registering: eth1
Mar 17 18:48:19.083580 kernel: mlx5_core 44e9:00:02.0 eth1: joined to eth0
Mar 17 18:48:18.913965 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:48:18.958276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:48:19.103869 kernel: mlx5_core 44e9:00:02.0 enP17641s1: renamed from eth1
Mar 17 18:48:19.107528 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:48:19.113710 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:48:19.123527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:48:19.140652 systemd[1]: Starting disk-uuid.service...
Mar 17 18:48:19.167474 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:48:19.180466 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:48:20.189314 disk-uuid[606]: The operation has completed successfully.
Mar 17 18:48:20.194305 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:48:20.250219 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:48:20.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.250322 systemd[1]: Finished disk-uuid.service.
Mar 17 18:48:20.255187 systemd[1]: Starting verity-setup.service...
Mar 17 18:48:20.300512 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 18:48:20.456131 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:48:20.462009 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:48:20.472713 systemd[1]: Finished verity-setup.service.
Mar 17 18:48:20.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.528187 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:48:20.535301 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:48:20.532102 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:48:20.532823 systemd[1]: Starting ignition-setup.service...
Mar 17 18:48:20.539442 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:48:20.574187 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:48:20.574226 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:48:20.578762 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:48:20.627347 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:48:20.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.636000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:48:20.637735 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:48:20.648516 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:48:20.665227 systemd-networkd[847]: lo: Link UP
Mar 17 18:48:20.665239 systemd-networkd[847]: lo: Gained carrier
Mar 17 18:48:20.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.665649 systemd-networkd[847]: Enumeration completed
Mar 17 18:48:20.666270 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:48:20.668887 systemd[1]: Started systemd-networkd.service.
Mar 17 18:48:20.675633 systemd[1]: Reached target network.target.
Mar 17 18:48:20.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.684827 systemd[1]: Starting iscsiuio.service...
Mar 17 18:48:20.711502 iscsid[856]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:48:20.711502 iscsid[856]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Mar 17 18:48:20.711502 iscsid[856]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Mar 17 18:48:20.711502 iscsid[856]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:48:20.711502 iscsid[856]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:48:20.711502 iscsid[856]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:48:20.711502 iscsid[856]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:48:20.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.695772 systemd[1]: Started iscsiuio.service.
Mar 17 18:48:20.706835 systemd[1]: Starting iscsid.service...
Mar 17 18:48:20.714998 systemd[1]: Started iscsid.service.
Mar 17 18:48:20.743192 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:48:20.776620 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:48:20.785913 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:48:20.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.796844 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:48:20.804794 systemd[1]: Reached target remote-fs.target.
Mar 17 18:48:20.815540 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:48:20.830425 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:48:20.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:20.851050 systemd[1]: Finished ignition-setup.service.
Mar 17 18:48:20.857078 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:48:20.879397 kernel: mlx5_core 44e9:00:02.0 enP17641s1: Link up
Mar 17 18:48:20.959476 kernel: hv_netvsc 000d3a07-3780-000d-3a07-3780000d3a07 eth0: Data path switched to VF: enP17641s1
Mar 17 18:48:20.964672 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 17 18:48:20.964938 systemd-networkd[847]: enP17641s1: Link UP
Mar 17 18:48:20.965124 systemd-networkd[847]: eth0: Link UP
Mar 17 18:48:20.965499 systemd-networkd[847]: eth0: Gained carrier
Mar 17 18:48:20.978864 systemd-networkd[847]: enP17641s1: Gained carrier
Mar 17 18:48:20.993510 systemd-networkd[847]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 18:48:22.024615 systemd-networkd[847]: eth0: Gained IPv6LL
Mar 17 18:48:23.114310 ignition[871]: Ignition 2.14.0
Mar 17 18:48:23.114321 ignition[871]: Stage: fetch-offline
Mar 17 18:48:23.114374 ignition[871]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:23.114398 ignition[871]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:23.188511 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:23.188674 ignition[871]: parsed url from cmdline: ""
Mar 17 18:48:23.188678 ignition[871]: no config URL provided
Mar 17 18:48:23.188683 ignition[871]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:48:23.238645 kernel: kauditd_printk_skb: 18 callbacks suppressed
Mar 17 18:48:23.238668 kernel: audit: type=1130 audit(1742237303.203:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.195413 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:48:23.188691 ignition[871]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:48:23.204944 systemd[1]: Starting ignition-fetch.service...
Mar 17 18:48:23.188696 ignition[871]: failed to fetch config: resource requires networking
Mar 17 18:48:23.188802 ignition[871]: Ignition finished successfully
Mar 17 18:48:23.244438 ignition[877]: Ignition 2.14.0
Mar 17 18:48:23.244445 ignition[877]: Stage: fetch
Mar 17 18:48:23.244595 ignition[877]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:23.244615 ignition[877]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:23.249228 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:23.251270 ignition[877]: parsed url from cmdline: ""
Mar 17 18:48:23.251279 ignition[877]: no config URL provided
Mar 17 18:48:23.251285 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:48:23.251298 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:48:23.251333 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 17 18:48:23.343598 ignition[877]: GET result: OK
Mar 17 18:48:23.343697 ignition[877]: config has been read from IMDS userdata
Mar 17 18:48:23.343750 ignition[877]: parsing config with SHA512: b258a2d0894f7eabc05fae151fa9744bce52ac455b845c1bbe8070a98d9550ee45affbdddd13b00643d22830b5531277f472d1ca525e873ca7d2bcca6567c6a1
Mar 17 18:48:23.347466 unknown[877]: fetched base config from "system"
Mar 17 18:48:23.377212 kernel: audit: type=1130 audit(1742237303.355:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.348021 ignition[877]: fetch: fetch complete
Mar 17 18:48:23.347474 unknown[877]: fetched base config from "system"
Mar 17 18:48:23.348026 ignition[877]: fetch: fetch passed
Mar 17 18:48:23.347479 unknown[877]: fetched user config from "azure"
Mar 17 18:48:23.348074 ignition[877]: Ignition finished successfully
Mar 17 18:48:23.420348 kernel: audit: type=1130 audit(1742237303.398:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.351367 systemd[1]: Finished ignition-fetch.service.
Mar 17 18:48:23.384343 ignition[883]: Ignition 2.14.0
Mar 17 18:48:23.356646 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:48:23.384349 ignition[883]: Stage: kargs
Mar 17 18:48:23.394139 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:48:23.467443 kernel: audit: type=1130 audit(1742237303.439:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.384466 ignition[883]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:23.399373 systemd[1]: Starting ignition-disks.service...
Mar 17 18:48:23.384484 ignition[883]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:23.434883 systemd[1]: Finished ignition-disks.service.
Mar 17 18:48:23.387179 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:23.439584 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:48:23.390392 ignition[883]: kargs: kargs passed
Mar 17 18:48:23.463562 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:48:23.390458 ignition[883]: Ignition finished successfully
Mar 17 18:48:23.471780 systemd[1]: Reached target local-fs.target.
Mar 17 18:48:23.409353 ignition[889]: Ignition 2.14.0
Mar 17 18:48:23.482562 systemd[1]: Reached target sysinit.target.
Mar 17 18:48:23.409360 ignition[889]: Stage: disks
Mar 17 18:48:23.490476 systemd[1]: Reached target basic.target.
Mar 17 18:48:23.409485 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:23.504785 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:48:23.409511 ignition[889]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:23.425557 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:23.426633 ignition[889]: disks: disks passed
Mar 17 18:48:23.426678 ignition[889]: Ignition finished successfully
Mar 17 18:48:23.598551 systemd-fsck[897]: ROOT: clean, 623/7326000 files, 481077/7359488 blocks
Mar 17 18:48:23.607698 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:48:23.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.634311 systemd[1]: Mounting sysroot.mount...
Mar 17 18:48:23.642820 kernel: audit: type=1130 audit(1742237303.611:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:23.655465 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:48:23.656338 systemd[1]: Mounted sysroot.mount.
Mar 17 18:48:23.660018 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:48:23.715579 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:48:23.720087 systemd[1]: Starting flatcar-metadata-hostname.service...
Mar 17 18:48:23.732377 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:48:23.732416 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:48:23.747501 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:48:23.800026 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:48:23.805146 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:48:23.827656 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907)
Mar 17 18:48:23.834176 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:48:23.848408 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:48:23.848430 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:48:23.848439 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:48:23.853661 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:48:23.869839 initrd-setup-root[938]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:48:23.892470 initrd-setup-root[946]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:48:23.902192 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:48:24.302974 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:48:24.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.326785 systemd[1]: Starting ignition-mount.service...
Mar 17 18:48:24.336294 kernel: audit: type=1130 audit(1742237304.307:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.335725 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:48:24.345027 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:48:24.345559 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:48:24.367048 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:48:24.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.388881 ignition[976]: INFO : Ignition 2.14.0
Mar 17 18:48:24.388881 ignition[976]: INFO : Stage: mount
Mar 17 18:48:24.388881 ignition[976]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:24.388881 ignition[976]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:24.437875 kernel: audit: type=1130 audit(1742237304.370:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.437898 kernel: audit: type=1130 audit(1742237304.410:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.405659 systemd[1]: Finished ignition-mount.service.
Mar 17 18:48:24.441926 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:24.441926 ignition[976]: INFO : mount: mount passed
Mar 17 18:48:24.441926 ignition[976]: INFO : Ignition finished successfully
Mar 17 18:48:24.775700 coreos-metadata[906]: Mar 17 18:48:24.775 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 17 18:48:24.785435 coreos-metadata[906]: Mar 17 18:48:24.785 INFO Fetch successful
Mar 17 18:48:24.818686 coreos-metadata[906]: Mar 17 18:48:24.818 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 17 18:48:24.831153 coreos-metadata[906]: Mar 17 18:48:24.831 INFO Fetch successful
Mar 17 18:48:24.845489 coreos-metadata[906]: Mar 17 18:48:24.845 INFO wrote hostname ci-3510.3.7-a-ffee15dd16 to /sysroot/etc/hostname
Mar 17 18:48:24.854196 systemd[1]: Finished flatcar-metadata-hostname.service.
Mar 17 18:48:24.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.880479 kernel: audit: type=1130 audit(1742237304.858:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:24.859819 systemd[1]: Starting ignition-files.service...
Mar 17 18:48:24.884841 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:48:24.901466 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (985)
Mar 17 18:48:24.912360 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:48:24.912378 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:48:24.916765 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:48:24.921607 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:48:24.934899 ignition[1004]: INFO : Ignition 2.14.0
Mar 17 18:48:24.934899 ignition[1004]: INFO : Stage: files
Mar 17 18:48:24.945397 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:48:24.945397 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Mar 17 18:48:24.945397 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 18:48:24.945397 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:48:24.945397 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:48:24.945397 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:48:25.003503 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:48:25.010801 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:48:25.018790 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:48:25.018790 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 18:48:25.018790 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 18:48:25.017728 unknown[1004]: wrote ssh authorized keys file for user: core
Mar 17 18:48:25.086206 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 18:48:25.237262 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 18:48:25.248336 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:48:25.248336 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 18:48:25.556105 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:48:25.627882 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:48:25.638712 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2947163351"
Mar 17 18:48:25.800285 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2947163351": device or resource busy
Mar 17 18:48:25.800285 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2947163351", trying btrfs: device or resource busy
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2947163351"
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2947163351"
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2947163351"
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2947163351"
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432361548"
Mar 17 18:48:25.800285 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432361548": device or resource busy
Mar 17 18:48:25.800285 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3432361548", trying btrfs: device or resource busy
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432361548"
Mar 17 18:48:25.800285 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3432361548"
Mar 17 18:48:25.642048 systemd[1]: mnt-oem2947163351.mount: Deactivated successfully.
Mar 17 18:48:25.965354 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3432361548"
Mar 17 18:48:25.965354 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3432361548"
Mar 17 18:48:25.965354 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:48:25.965354 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:48:25.965354 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 18:48:25.965354 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Mar 17 18:48:25.694535 systemd[1]: mnt-oem3432361548.mount: Deactivated successfully.
Mar 17 18:48:26.188082 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:48:26.188082 ignition[1004]: INFO : files: op(14): [started] processing unit "waagent.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(14): [finished] processing unit "waagent.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(15): [started] processing unit "nvidia.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(15): [finished] processing unit "nvidia.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: createResultFile: createFiles: op(1b): [started] 
writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:48:26.207438 ignition[1004]: INFO : files: files passed Mar 17 18:48:26.207438 ignition[1004]: INFO : Ignition finished successfully Mar 17 18:48:26.405947 kernel: audit: type=1130 audit(1742237306.211:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.201249 systemd[1]: Finished ignition-files.service. 
Mar 17 18:48:26.236704 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:48:26.248492 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:48:26.435791 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:48:26.254887 systemd[1]: Starting ignition-quench.service... Mar 17 18:48:26.267200 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:48:26.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.267292 systemd[1]: Finished ignition-quench.service. Mar 17 18:48:26.317783 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:48:26.329674 systemd[1]: Reached target ignition-complete.target. Mar 17 18:48:26.344037 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:48:26.365506 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:48:26.365606 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:48:26.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.379022 systemd[1]: Reached target initrd-fs.target. Mar 17 18:48:26.392937 systemd[1]: Reached target initrd.target. Mar 17 18:48:26.401123 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:48:26.402031 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:48:26.450218 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:48:26.455973 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:48:26.475314 systemd[1]: Stopped target nss-lookup.target. 
Mar 17 18:48:26.483266 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:48:26.492973 systemd[1]: Stopped target timers.target. Mar 17 18:48:26.502729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:48:26.502833 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:48:26.513311 systemd[1]: Stopped target initrd.target. Mar 17 18:48:26.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.521654 systemd[1]: Stopped target basic.target. Mar 17 18:48:26.529584 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:48:26.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.538320 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:48:26.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.550502 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:48:26.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.559356 systemd[1]: Stopped target remote-fs.target. Mar 17 18:48:26.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.567394 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:48:26.575711 systemd[1]: Stopped target sysinit.target. 
Mar 17 18:48:26.681017 iscsid[856]: iscsid shutting down. Mar 17 18:48:26.583800 systemd[1]: Stopped target local-fs.target. Mar 17 18:48:26.694362 ignition[1042]: INFO : Ignition 2.14.0 Mar 17 18:48:26.694362 ignition[1042]: INFO : Stage: umount Mar 17 18:48:26.694362 ignition[1042]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:26.694362 ignition[1042]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:26.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.592343 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:48:26.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.749754 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:26.749754 ignition[1042]: INFO : umount: umount passed Mar 17 18:48:26.749754 ignition[1042]: INFO : Ignition finished successfully Mar 17 18:48:26.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:26.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.600550 systemd[1]: Stopped target swap.target. Mar 17 18:48:26.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.609858 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:48:26.609976 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:48:26.618334 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:48:26.626693 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:48:26.626799 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:48:26.636170 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:48:26.636266 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:48:26.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.646317 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:48:26.646400 systemd[1]: Stopped ignition-files.service. Mar 17 18:48:26.654604 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:48:26.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:26.654692 systemd[1]: Stopped flatcar-metadata-hostname.service. Mar 17 18:48:26.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.664966 systemd[1]: Stopping ignition-mount.service... Mar 17 18:48:26.674255 systemd[1]: Stopping iscsid.service... Mar 17 18:48:26.690107 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:48:26.699524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:48:26.699725 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:48:26.704772 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:48:26.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.704916 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:48:26.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.719410 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:48:26.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.942000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:48:26.719533 systemd[1]: Stopped iscsid.service. Mar 17 18:48:26.729262 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 17 18:48:26.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.729344 systemd[1]: Stopped ignition-mount.service. Mar 17 18:48:26.747078 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:48:26.749117 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:48:26.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.749179 systemd[1]: Stopped ignition-disks.service. Mar 17 18:48:26.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.754276 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:48:26.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.754324 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:48:26.765781 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:48:26.765824 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:48:27.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.774373 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:48:26.774412 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:48:26.784324 systemd[1]: Stopped target paths.target. 
Mar 17 18:48:27.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.788639 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:48:27.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.798045 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:48:27.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.806588 systemd[1]: Stopped target slices.target. Mar 17 18:48:26.814796 systemd[1]: Stopped target sockets.target. Mar 17 18:48:26.824231 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:48:26.824288 systemd[1]: Closed iscsid.socket. Mar 17 18:48:27.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.831844 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:48:27.118254 kernel: hv_netvsc 000d3a07-3780-000d-3a07-3780000d3a07 eth0: Data path switched from VF: enP17641s1 Mar 17 18:48:27.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.831892 systemd[1]: Stopped ignition-setup.service. Mar 17 18:48:27.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:48:26.842206 systemd[1]: Stopping iscsiuio.service... Mar 17 18:48:27.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:27.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.856572 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:48:26.856666 systemd[1]: Stopped iscsiuio.service. Mar 17 18:48:26.864018 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:48:26.864102 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:48:26.873485 systemd[1]: Stopped target network.target. Mar 17 18:48:26.883177 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:48:26.883214 systemd[1]: Closed iscsiuio.socket. Mar 17 18:48:26.896751 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:48:26.904175 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:48:26.913199 systemd-networkd[847]: eth0: DHCPv6 lease lost Mar 17 18:48:27.173000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:48:26.914677 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:48:26.914774 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:48:26.924206 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:48:26.924301 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:48:26.934151 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:48:26.934232 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:48:26.942802 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:48:26.942865 systemd[1]: Closed systemd-networkd.socket. 
Mar 17 18:48:26.950981 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:48:26.951029 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:48:27.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:26.959538 systemd[1]: Stopping network-cleanup.service... Mar 17 18:48:26.971219 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:48:26.971298 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:48:26.980408 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:48:26.980495 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:48:26.993357 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:48:26.993398 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:48:26.998071 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:48:27.013663 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:48:27.287459 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Mar 17 18:48:27.023156 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:48:27.023291 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:48:27.027689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:48:27.027737 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:48:27.035784 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:48:27.035820 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:48:27.045315 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:48:27.045359 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:48:27.053754 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:48:27.053791 systemd[1]: Stopped dracut-cmdline.service. 
Mar 17 18:48:27.061218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:48:27.061253 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:48:27.075626 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:48:27.088193 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:48:27.088261 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 18:48:27.100430 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:48:27.100606 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:48:27.113270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:48:27.113336 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:48:27.123947 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 18:48:27.124487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:48:27.124567 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:48:27.216133 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:48:27.216250 systemd[1]: Stopped network-cleanup.service. Mar 17 18:48:27.225379 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:48:27.234487 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:48:27.250359 systemd[1]: Switching root. Mar 17 18:48:27.292909 systemd-journald[276]: Journal stopped Mar 17 18:48:37.353687 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:48:37.353707 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 17 18:48:37.353718 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:48:37.353728 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:48:37.353736 kernel: SELinux: policy capability open_perms=1 Mar 17 18:48:37.353744 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:48:37.353753 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:48:37.353762 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:48:37.353771 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:48:37.353778 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:48:37.353786 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:48:37.353796 kernel: kauditd_printk_skb: 43 callbacks suppressed Mar 17 18:48:37.353805 kernel: audit: type=1403 audit(1742237309.222:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:48:37.353815 systemd[1]: Successfully loaded SELinux policy in 246.077ms. Mar 17 18:48:37.353825 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.245ms. Mar 17 18:48:37.353837 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:48:37.353846 systemd[1]: Detected virtualization microsoft. Mar 17 18:48:37.353855 systemd[1]: Detected architecture arm64. Mar 17 18:48:37.353864 systemd[1]: Detected first boot. Mar 17 18:48:37.353873 systemd[1]: Hostname set to . Mar 17 18:48:37.353882 systemd[1]: Initializing machine ID from random generator. 
Mar 17 18:48:37.353891 kernel: audit: type=1400 audit(1742237309.881:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:37.353902 kernel: audit: type=1400 audit(1742237309.885:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:37.353910 kernel: audit: type=1334 audit(1742237309.897:85): prog-id=10 op=LOAD Mar 17 18:48:37.353919 kernel: audit: type=1334 audit(1742237309.897:86): prog-id=10 op=UNLOAD Mar 17 18:48:37.353927 kernel: audit: type=1334 audit(1742237309.915:87): prog-id=11 op=LOAD Mar 17 18:48:37.353936 kernel: audit: type=1334 audit(1742237309.915:88): prog-id=11 op=UNLOAD Mar 17 18:48:37.353945 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:48:37.353955 kernel: audit: type=1400 audit(1742237311.002:89): avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:48:37.353966 kernel: audit: type=1300 audit(1742237311.002:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227f2 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:37.353976 kernel: audit: type=1327 audit(1742237311.002:89): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:48:37.353985 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:48:37.353994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:48:37.354004 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:48:37.354015 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:48:37.354025 kernel: kauditd_printk_skb: 6 callbacks suppressed Mar 17 18:48:37.354033 kernel: audit: type=1334 audit(1742237316.626:91): prog-id=12 op=LOAD Mar 17 18:48:37.354042 kernel: audit: type=1334 audit(1742237316.626:92): prog-id=3 op=UNLOAD Mar 17 18:48:37.354050 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:48:37.354060 kernel: audit: type=1334 audit(1742237316.632:93): prog-id=13 op=LOAD Mar 17 18:48:37.354071 systemd[1]: Stopped initrd-switch-root.service. Mar 17 18:48:37.354081 kernel: audit: type=1334 audit(1742237316.637:94): prog-id=14 op=LOAD Mar 17 18:48:37.354089 kernel: audit: type=1334 audit(1742237316.637:95): prog-id=4 op=UNLOAD Mar 17 18:48:37.354099 kernel: audit: type=1334 audit(1742237316.637:96): prog-id=5 op=UNLOAD Mar 17 18:48:37.354109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Mar 17 18:48:37.354118 kernel: audit: type=1131 audit(1742237316.638:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.354128 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:48:37.354137 kernel: audit: type=1334 audit(1742237316.660:98): prog-id=12 op=UNLOAD Mar 17 18:48:37.354146 kernel: audit: type=1130 audit(1742237316.679:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.354156 kernel: audit: type=1131 audit(1742237316.679:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.354167 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:48:37.354176 systemd[1]: Created slice system-getty.slice. Mar 17 18:48:37.354185 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:48:37.354195 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:48:37.354205 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:48:37.354214 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:48:37.354223 systemd[1]: Created slice user.slice. Mar 17 18:48:37.354233 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:48:37.354242 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:48:37.354252 systemd[1]: Set up automount boot.automount. Mar 17 18:48:37.354261 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:48:37.354270 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:48:37.354279 systemd[1]: Stopped target initrd-fs.target. 
Mar 17 18:48:37.354288 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:48:37.354297 systemd[1]: Reached target integritysetup.target. Mar 17 18:48:37.354307 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:48:37.354316 systemd[1]: Reached target remote-fs.target. Mar 17 18:48:37.354327 systemd[1]: Reached target slices.target. Mar 17 18:48:37.354336 systemd[1]: Reached target swap.target. Mar 17 18:48:37.354345 systemd[1]: Reached target torcx.target. Mar 17 18:48:37.354354 systemd[1]: Reached target veritysetup.target. Mar 17 18:48:37.354363 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:48:37.354373 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:48:37.354382 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:48:37.354393 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:48:37.354403 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:48:37.354412 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:48:37.354421 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:48:37.354430 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:48:37.354440 systemd[1]: Mounting media.mount... Mar 17 18:48:37.354465 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:48:37.354476 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:48:37.354485 systemd[1]: Mounting tmp.mount... Mar 17 18:48:37.354495 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:48:37.354504 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:48:37.354514 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:48:37.354523 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:48:37.354532 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:48:37.354541 systemd[1]: Starting modprobe@drm.service... Mar 17 18:48:37.354552 systemd[1]: Starting modprobe@efi_pstore.service... 
Mar 17 18:48:37.354562 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:48:37.354571 systemd[1]: Starting modprobe@loop.service... Mar 17 18:48:37.354581 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:48:37.354591 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:48:37.354600 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:48:37.354611 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:48:37.354621 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:48:37.354630 systemd[1]: Stopped systemd-journald.service. Mar 17 18:48:37.354640 kernel: loop: module loaded Mar 17 18:48:37.354649 systemd[1]: systemd-journald.service: Consumed 2.920s CPU time. Mar 17 18:48:37.354658 systemd[1]: Starting systemd-journald.service... Mar 17 18:48:37.354668 kernel: fuse: init (API version 7.34) Mar 17 18:48:37.354676 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:48:37.354685 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:48:37.354704 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:48:37.354714 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:48:37.354723 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:48:37.354733 systemd[1]: Stopped verity-setup.service. Mar 17 18:48:37.354743 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:48:37.354752 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:48:37.354761 systemd[1]: Mounted media.mount. Mar 17 18:48:37.354770 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:48:37.354783 systemd-journald[1181]: Journal started Mar 17 18:48:37.354822 systemd-journald[1181]: Runtime Journal (/run/log/journal/72405675735643d093f46db449b83ba1) is 8.0M, max 78.5M, 70.5M free. 
Mar 17 18:48:29.222000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:48:29.881000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:29.885000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:29.897000 audit: BPF prog-id=10 op=LOAD Mar 17 18:48:29.897000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:48:29.915000 audit: BPF prog-id=11 op=LOAD Mar 17 18:48:29.915000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:48:31.002000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:48:31.002000 audit[1075]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227f2 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:31.002000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:48:31.011000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:48:31.011000 audit[1075]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228c9 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:31.011000 audit: CWD cwd="/" Mar 17 18:48:31.011000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:31.011000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:31.011000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:48:36.626000 audit: BPF prog-id=12 op=LOAD Mar 17 18:48:36.626000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:48:36.632000 audit: BPF prog-id=13 op=LOAD Mar 17 18:48:36.637000 audit: BPF prog-id=14 op=LOAD Mar 17 18:48:36.637000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:48:36.637000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:48:36.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.660000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:48:36.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:36.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.227000 audit: BPF prog-id=15 op=LOAD Mar 17 18:48:37.228000 audit: BPF prog-id=16 op=LOAD Mar 17 18:48:37.228000 audit: BPF prog-id=17 op=LOAD Mar 17 18:48:37.228000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:48:37.228000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:48:37.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:37.351000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:48:37.351000 audit[1181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe0ea2cb0 a2=4000 a3=1 items=0 ppid=1 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:37.351000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:48:36.625514 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:48:30.962095 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:48:36.625526 systemd[1]: Unnecessary job was removed for dev-sda6.device. Mar 17 18:48:30.987817 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:48:36.638926 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:48:30.987838 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:48:36.639266 systemd[1]: systemd-journald.service: Consumed 2.920s CPU time. 
Mar 17 18:48:30.987878 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:48:30.987888 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:48:30.987925 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:48:30.987937 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:48:30.988143 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:48:30.988178 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:48:30.988189 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:48:30.988630 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:48:30.988662 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:48:30.988684 /usr/lib/systemd/system-generators/torcx-generator[1075]: 
time="2025-03-17T18:48:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:48:30.988698 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:48:30.988715 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:48:30.988728 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:48:35.682077 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:48:35.682372 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:48:35.682506 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:48:35.682704 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:35Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:48:35.682758 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:48:35.682823 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-03-17T18:48:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:48:37.363531 systemd[1]: Started systemd-journald.service. Mar 17 18:48:37.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.364032 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:48:37.368331 systemd[1]: Mounted tmp.mount. Mar 17 18:48:37.372321 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:48:37.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.377098 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:48:37.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.381950 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Mar 17 18:48:37.382261 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:48:37.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.387390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:48:37.387526 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:48:37.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.392199 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:48:37.392361 systemd[1]: Finished modprobe@drm.service. Mar 17 18:48:37.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.396979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:48:37.397263 systemd[1]: Finished modprobe@efi_pstore.service. 
Mar 17 18:48:37.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.402222 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:48:37.402374 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:48:37.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.406854 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:48:37.406968 systemd[1]: Finished modprobe@loop.service. Mar 17 18:48:37.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.411425 systemd[1]: Finished systemd-modules-load.service. 
Mar 17 18:48:37.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.416517 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:48:37.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.421750 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:48:37.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.426637 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:48:37.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.431863 systemd[1]: Reached target network-pre.target. Mar 17 18:48:37.437622 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:48:37.443165 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:48:37.447291 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:48:37.470437 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:48:37.475745 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:48:37.480528 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:48:37.481557 systemd[1]: Starting systemd-random-seed.service... 
Mar 17 18:48:37.485702 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:48:37.486678 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:48:37.491415 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:48:37.497158 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:48:37.503365 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:48:37.508616 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:48:37.518303 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:48:37.528046 systemd-journald[1181]: Time spent on flushing to /var/log/journal/72405675735643d093f46db449b83ba1 is 13.837ms for 1092 entries. Mar 17 18:48:37.528046 systemd-journald[1181]: System Journal (/var/log/journal/72405675735643d093f46db449b83ba1) is 8.0M, max 2.6G, 2.6G free. Mar 17 18:48:37.601795 systemd-journald[1181]: Received client request to flush runtime journal. Mar 17 18:48:37.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.536349 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:48:37.541130 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:48:37.566276 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:48:37.602790 systemd[1]: Finished systemd-journal-flush.service. 
Mar 17 18:48:37.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.975383 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:48:37.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.981223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:48:38.329374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:48:38.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.499875 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:48:38.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.505000 audit: BPF prog-id=18 op=LOAD Mar 17 18:48:38.505000 audit: BPF prog-id=19 op=LOAD Mar 17 18:48:38.505000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:48:38.505000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:48:38.506209 systemd[1]: Starting systemd-udevd.service... Mar 17 18:48:38.524337 systemd-udevd[1200]: Using default interface naming scheme 'v252'. Mar 17 18:48:38.762291 systemd[1]: Started systemd-udevd.service. Mar 17 18:48:38.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:38.772000 audit: BPF prog-id=20 op=LOAD Mar 17 18:48:38.774658 systemd[1]: Starting systemd-networkd.service... Mar 17 18:48:38.798176 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Mar 17 18:48:38.819000 audit: BPF prog-id=21 op=LOAD Mar 17 18:48:38.819000 audit: BPF prog-id=22 op=LOAD Mar 17 18:48:38.819000 audit: BPF prog-id=23 op=LOAD Mar 17 18:48:38.821100 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:48:38.853481 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:48:38.874946 systemd[1]: Started systemd-userdbd.service. Mar 17 18:48:38.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.900000 audit[1216]: AVC avc: denied { confidentiality } for pid=1216 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:48:38.908487 kernel: hv_vmbus: registering driver hv_balloon Mar 17 18:48:38.908575 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 18:48:38.918937 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 17 18:48:38.935489 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 18:48:38.940535 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 18:48:38.940622 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 18:48:38.955443 kernel: Console: switching to colour dummy device 80x25 Mar 17 18:48:38.958475 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 18:48:38.900000 audit[1216]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadde33c70 a1=aa2c a2=ffffb86e24b0 a3=aaaaddb8e010 items=12 ppid=1200 pid=1216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:38.900000 audit: CWD cwd="/" Mar 17 18:48:38.900000 audit: PATH item=0 name=(null) inode=7232 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=1 name=(null) inode=9163 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=2 name=(null) inode=9163 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=3 name=(null) inode=9164 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=4 name=(null) inode=9163 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=5 name=(null) inode=9165 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=6 name=(null) inode=9163 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=7 name=(null) inode=9166 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=8 name=(null) inode=9163 dev=00:0a mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=9 name=(null) inode=9167 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=10 name=(null) inode=9163 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PATH item=11 name=(null) inode=9168 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:38.900000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:48:38.984315 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 18:48:38.984411 kernel: hv_vmbus: registering driver hv_utils Mar 17 18:48:38.995476 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 18:48:38.995578 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 18:48:38.995607 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 18:48:38.681591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:48:38.766002 systemd-journald[1181]: Time jumped backwards, rotating. Mar 17 18:48:38.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.692921 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:48:38.699561 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:48:38.773061 systemd-networkd[1221]: lo: Link UP Mar 17 18:48:38.773072 systemd-networkd[1221]: lo: Gained carrier Mar 17 18:48:38.773444 systemd-networkd[1221]: Enumeration completed Mar 17 18:48:38.773543 systemd[1]: Started systemd-networkd.service. 
Mar 17 18:48:38.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:38.779234 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:48:38.788192 systemd-networkd[1221]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:48:38.841616 kernel: mlx5_core 44e9:00:02.0 enP17641s1: Link up
Mar 17 18:48:38.885594 kernel: hv_netvsc 000d3a07-3780-000d-3a07-3780000d3a07 eth0: Data path switched to VF: enP17641s1
Mar 17 18:48:38.886489 systemd-networkd[1221]: enP17641s1: Link UP
Mar 17 18:48:38.886578 systemd-networkd[1221]: eth0: Link UP
Mar 17 18:48:38.886596 systemd-networkd[1221]: eth0: Gained carrier
Mar 17 18:48:38.892824 systemd-networkd[1221]: enP17641s1: Gained carrier
Mar 17 18:48:38.905679 systemd-networkd[1221]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 18:48:39.092041 lvm[1276]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:48:39.138466 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:48:39.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.143372 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:48:39.148839 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:48:39.152998 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:48:39.177392 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:48:39.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.181866 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:48:39.186386 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:48:39.186417 systemd[1]: Reached target local-fs.target.
Mar 17 18:48:39.190533 systemd[1]: Reached target machines.target.
Mar 17 18:48:39.196045 systemd[1]: Starting ldconfig.service...
Mar 17 18:48:39.199793 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:39.199859 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:39.200949 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:48:39.205990 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:48:39.212286 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:48:39.217852 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:48:39.231812 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1281 (bootctl)
Mar 17 18:48:39.232860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:48:39.600817 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:48:39.874613 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:48:39.874931 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:48:39.918607 kernel: loop0: detected capacity change from 0 to 194096
Mar 17 18:48:39.929154 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:48:39.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.946602 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:48:39.965601 kernel: loop1: detected capacity change from 0 to 194096
Mar 17 18:48:39.970081 (sd-sysext)[1294]: Using extensions 'kubernetes'.
Mar 17 18:48:39.970678 (sd-sysext)[1294]: Merged extensions into '/usr'.
Mar 17 18:48:39.977795 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:48:39.978369 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:48:39.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.992927 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:48:39.996723 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:39.997986 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:40.002920 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:40.007906 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:40.011653 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.011772 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:40.013984 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:48:40.018626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:40.018752 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:40.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.023394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:40.023507 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:40.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.028407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:40.028520 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:40.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.033235 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:48:40.033331 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.034285 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:48:40.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.039739 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:48:40.047886 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:48:40.056665 systemd[1]: Reloading.
Mar 17 18:48:40.064450 systemd-fsck[1293]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:48:40.064450 systemd-fsck[1293]: /dev/sda1: 236 files, 117179/258078 clusters
Mar 17 18:48:40.074442 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:48:40.090607 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:48:40.103765 /usr/lib/systemd/system-generators/torcx-generator[1323]: time="2025-03-17T18:48:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:48:40.104054 /usr/lib/systemd/system-generators/torcx-generator[1323]: time="2025-03-17T18:48:40Z" level=info msg="torcx already run"
Mar 17 18:48:40.127576 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:48:40.188710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:48:40.188728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:48:40.203887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:48:40.265000 audit: BPF prog-id=24 op=LOAD
Mar 17 18:48:40.265000 audit: BPF prog-id=20 op=UNLOAD
Mar 17 18:48:40.266000 audit: BPF prog-id=25 op=LOAD
Mar 17 18:48:40.266000 audit: BPF prog-id=26 op=LOAD
Mar 17 18:48:40.266000 audit: BPF prog-id=18 op=UNLOAD
Mar 17 18:48:40.266000 audit: BPF prog-id=19 op=UNLOAD
Mar 17 18:48:40.268000 audit: BPF prog-id=27 op=LOAD
Mar 17 18:48:40.268000 audit: BPF prog-id=15 op=UNLOAD
Mar 17 18:48:40.268000 audit: BPF prog-id=28 op=LOAD
Mar 17 18:48:40.268000 audit: BPF prog-id=29 op=LOAD
Mar 17 18:48:40.268000 audit: BPF prog-id=16 op=UNLOAD
Mar 17 18:48:40.268000 audit: BPF prog-id=17 op=UNLOAD
Mar 17 18:48:40.269000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:48:40.269000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:48:40.269000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:48:40.269000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:48:40.269000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:48:40.269000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:48:40.272100 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:48:40.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.283385 systemd[1]: Mounting boot.mount...
Mar 17 18:48:40.290460 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.291641 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:40.296483 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:40.301931 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:40.305938 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.306065 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:40.308243 systemd[1]: Mounted boot.mount.
Mar 17 18:48:40.312282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:40.312426 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:40.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.317917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:40.318045 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:40.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.323246 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:48:40.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.328077 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:40.328191 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:40.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.334151 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.335429 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:40.340984 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:40.346161 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:40.349920 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.350046 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:40.350828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:40.350975 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:40.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.355751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:40.355871 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:40.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.360811 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:40.360930 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:40.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.368124 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.369490 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:40.374308 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:48:40.379013 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:40.384239 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:40.387994 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.388119 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:40.389001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:40.389128 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:40.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.393848 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:48:40.393960 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:48:40.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.398605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:40.398717 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:40.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.403560 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:40.403682 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:40.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.408491 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:48:40.408589 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.409858 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:48:40.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.578685 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:48:40.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.585183 systemd[1]: Starting audit-rules.service...
Mar 17 18:48:40.590332 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:48:40.595807 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:48:40.600000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:48:40.602331 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:48:40.608000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:48:40.609566 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:48:40.614636 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:48:40.644000 audit[1402]: SYSTEM_BOOT pid=1402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.647924 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:48:40.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.687952 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:48:40.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.692964 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:48:40.722785 systemd-networkd[1221]: eth0: Gained IPv6LL
Mar 17 18:48:40.725780 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:48:40.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.730625 systemd[1]: Reached target time-set.target.
Mar 17 18:48:40.735007 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:48:40.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.757437 systemd-resolved[1399]: Positive Trust Anchors:
Mar 17 18:48:40.757450 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:48:40.757477 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:48:40.813691 systemd-resolved[1399]: Using system hostname 'ci-3510.3.7-a-ffee15dd16'.
Mar 17 18:48:40.815161 systemd[1]: Started systemd-resolved.service.
Mar 17 18:48:40.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.819958 systemd[1]: Reached target network.target.
Mar 17 18:48:40.824602 systemd[1]: Reached target network-online.target.
Mar 17 18:48:40.829497 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:48:40.880188 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:48:40.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.967000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:48:40.967000 audit[1417]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd5106330 a2=420 a3=0 items=0 ppid=1396 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:48:40.967000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:48:40.968404 augenrules[1417]: No rules
Mar 17 18:48:40.969266 systemd[1]: Finished audit-rules.service.
Mar 17 18:48:40.989619 systemd-timesyncd[1401]: Contacted time server 104.171.113.34:123 (0.flatcar.pool.ntp.org).
Mar 17 18:48:40.989684 systemd-timesyncd[1401]: Initial clock synchronization to Mon 2025-03-17 18:48:40.984560 UTC.
Mar 17 18:48:46.082787 ldconfig[1280]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:48:46.097146 systemd[1]: Finished ldconfig.service.
Mar 17 18:48:46.103127 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:48:46.131002 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:48:46.135795 systemd[1]: Reached target sysinit.target.
Mar 17 18:48:46.140080 systemd[1]: Started motdgen.path.
Mar 17 18:48:46.144015 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:48:46.150009 systemd[1]: Started logrotate.timer.
Mar 17 18:48:46.153889 systemd[1]: Started mdadm.timer.
Mar 17 18:48:46.157761 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:48:46.162349 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:48:46.162385 systemd[1]: Reached target paths.target.
Mar 17 18:48:46.166454 systemd[1]: Reached target timers.target.
Mar 17 18:48:46.171447 systemd[1]: Listening on dbus.socket.
Mar 17 18:48:46.176861 systemd[1]: Starting docker.socket...
Mar 17 18:48:46.183025 systemd[1]: Listening on sshd.socket.
Mar 17 18:48:46.187058 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:46.187482 systemd[1]: Listening on docker.socket.
Mar 17 18:48:46.191894 systemd[1]: Reached target sockets.target.
Mar 17 18:48:46.195924 systemd[1]: Reached target basic.target.
Mar 17 18:48:46.199994 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:48:46.200023 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:48:46.200965 systemd[1]: Starting containerd.service...
Mar 17 18:48:46.205537 systemd[1]: Starting dbus.service...
Mar 17 18:48:46.209542 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:48:46.214630 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:48:46.218654 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:48:46.222803 systemd[1]: Starting kubelet.service...
Mar 17 18:48:46.227193 systemd[1]: Starting motdgen.service...
Mar 17 18:48:46.231641 systemd[1]: Started nvidia.service.
Mar 17 18:48:46.237156 systemd[1]: Starting prepare-helm.service...
Mar 17 18:48:46.241947 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:48:46.248117 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:48:46.256543 systemd[1]: Starting systemd-logind.service...
Mar 17 18:48:46.261958 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:46.262028 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:48:46.262464 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:48:46.263134 systemd[1]: Starting update-engine.service...
Mar 17 18:48:46.266986 jq[1427]: false
Mar 17 18:48:46.268090 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:48:46.271427 jq[1445]: true
Mar 17 18:48:46.277759 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:48:46.277934 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:48:46.283914 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:48:46.284309 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:48:46.299724 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:48:46.299918 systemd[1]: Finished motdgen.service.
Mar 17 18:48:46.313471 extend-filesystems[1428]: Found loop1
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda1
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda2
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda3
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found usr
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda4
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda6
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda7
Mar 17 18:48:46.317831 extend-filesystems[1428]: Found sda9
Mar 17 18:48:46.317831 extend-filesystems[1428]: Checking size of /dev/sda9
Mar 17 18:48:46.435113 tar[1448]: linux-arm64/helm
Mar 17 18:48:46.435316 env[1455]: time="2025-03-17T18:48:46.403805299Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:48:46.366305 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:48:46.435576 jq[1450]: true
Mar 17 18:48:46.435678 extend-filesystems[1428]: Old size kept for /dev/sda9
Mar 17 18:48:46.435678 extend-filesystems[1428]: Found sr0
Mar 17 18:48:46.368772 systemd-logind[1442]: New seat seat0.
Mar 17 18:48:46.419825 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:48:46.419988 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:48:46.471322 bash[1478]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:48:46.471934 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:48:46.500258 env[1455]: time="2025-03-17T18:48:46.500204036Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:48:46.500378 env[1455]: time="2025-03-17T18:48:46.500362440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.506228 dbus-daemon[1426]: [system] SELinux support is enabled
Mar 17 18:48:46.506405 systemd[1]: Started dbus.service.
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507190631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507223184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507449453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507471328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507490684Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507503561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507571026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507791977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507903632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:48:46.517643 env[1455]: time="2025-03-17T18:48:46.507918068Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:48:46.512171 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:48:46.517867 env[1455]: time="2025-03-17T18:48:46.507963698Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:48:46.517867 env[1455]: time="2025-03-17T18:48:46.507975136Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:48:46.512191 systemd[1]: Reached target system-config.target.
Mar 17 18:48:46.519883 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:48:46.519905 systemd[1]: Reached target user-config.target.
Mar 17 18:48:46.527412 systemd[1]: Started systemd-logind.service.
Mar 17 18:48:46.537122 env[1455]: time="2025-03-17T18:48:46.537081059Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:48:46.537122 env[1455]: time="2025-03-17T18:48:46.537125169Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:48:46.537233 env[1455]: time="2025-03-17T18:48:46.537142245Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:48:46.537233 env[1455]: time="2025-03-17T18:48:46.537193154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537233 env[1455]: time="2025-03-17T18:48:46.537208110Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537233 env[1455]: time="2025-03-17T18:48:46.537222307Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537328 env[1455]: time="2025-03-17T18:48:46.537299130Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537728 env[1455]: time="2025-03-17T18:48:46.537704479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537728 env[1455]: time="2025-03-17T18:48:46.537729154Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537809 env[1455]: time="2025-03-17T18:48:46.537743111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537809 env[1455]: time="2025-03-17T18:48:46.537755748Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.537809 env[1455]: time="2025-03-17T18:48:46.537770625Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:48:46.537922 env[1455]: time="2025-03-17T18:48:46.537897756Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:48:46.538043 env[1455]: time="2025-03-17T18:48:46.538019529Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:48:46.538362 env[1455]: time="2025-03-17T18:48:46.538337498Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.538414 env[1455]: time="2025-03-17T18:48:46.538370090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538414 env[1455]: time="2025-03-17T18:48:46.538387486Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:48:46.538463 env[1455]: time="2025-03-17T18:48:46.538435196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538463 env[1455]: time="2025-03-17T18:48:46.538448473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538503 env[1455]: time="2025-03-17T18:48:46.538461430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538503 env[1455]: time="2025-03-17T18:48:46.538473267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538565 env[1455]: time="2025-03-17T18:48:46.538544531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538565 env[1455]: time="2025-03-17T18:48:46.538563127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538788 env[1455]: time="2025-03-17T18:48:46.538575644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.538788 env[1455]: time="2025-03-17T18:48:46.538603718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"...
type=io.containerd.grpc.v1 Mar 17 18:48:46.538788 env[1455]: time="2025-03-17T18:48:46.538617995Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:48:46.538788 env[1455]: time="2025-03-17T18:48:46.538757604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:48:46.538788 env[1455]: time="2025-03-17T18:48:46.538776759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:48:46.538788 env[1455]: time="2025-03-17T18:48:46.538789996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:48:46.538912 env[1455]: time="2025-03-17T18:48:46.538802713Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:48:46.538912 env[1455]: time="2025-03-17T18:48:46.538817470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:48:46.538912 env[1455]: time="2025-03-17T18:48:46.538827988Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:48:46.538912 env[1455]: time="2025-03-17T18:48:46.538849423Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:48:46.538912 env[1455]: time="2025-03-17T18:48:46.538885095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:48:46.539135 env[1455]: time="2025-03-17T18:48:46.539080531Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.539137319Z" level=info msg="Connect containerd service" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.539168352Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.539786973Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.539900508Z" level=info msg="Start subscribing containerd event" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.539939939Z" level=info msg="Start recovering state" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.539993607Z" level=info msg="Start event monitor" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.540009723Z" level=info msg="Start snapshots syncer" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.540018281Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.540026399Z" level=info msg="Start streaming server" Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.540302138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:48:46.557863 env[1455]: time="2025-03-17T18:48:46.540354566Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:48:46.548545 systemd[1]: nvidia.service: Deactivated successfully. 
Mar 17 18:48:46.566793 env[1455]: time="2025-03-17T18:48:46.566755055Z" level=info msg="containerd successfully booted in 0.163630s" Mar 17 18:48:46.574109 systemd[1]: Started containerd.service. Mar 17 18:48:46.850443 update_engine[1444]: I0317 18:48:46.837447 1444 main.cc:92] Flatcar Update Engine starting Mar 17 18:48:46.895853 systemd[1]: Started update-engine.service. Mar 17 18:48:46.902443 update_engine[1444]: I0317 18:48:46.895893 1444 update_check_scheduler.cc:74] Next update check in 9m53s Mar 17 18:48:46.903795 systemd[1]: Started locksmithd.service. Mar 17 18:48:46.914549 tar[1448]: linux-arm64/LICENSE Mar 17 18:48:46.914744 tar[1448]: linux-arm64/README.md Mar 17 18:48:46.919160 systemd[1]: Finished prepare-helm.service. Mar 17 18:48:47.163080 systemd[1]: Started kubelet.service. Mar 17 18:48:47.570535 kubelet[1532]: E0317 18:48:47.570500 1532 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:47.572574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:47.572721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:48.013025 sshd_keygen[1443]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:48:48.029689 systemd[1]: Finished sshd-keygen.service. Mar 17 18:48:48.035835 systemd[1]: Starting issuegen.service... Mar 17 18:48:48.040651 systemd[1]: Started waagent.service. Mar 17 18:48:48.045157 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:48:48.045329 systemd[1]: Finished issuegen.service. Mar 17 18:48:48.051076 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:48:48.073299 systemd[1]: Finished systemd-user-sessions.service. 
Mar 17 18:48:48.079409 systemd[1]: Started getty@tty1.service. Mar 17 18:48:48.084957 systemd[1]: Started serial-getty@ttyAMA0.service. Mar 17 18:48:48.089810 systemd[1]: Reached target getty.target. Mar 17 18:48:48.093904 systemd[1]: Reached target multi-user.target. Mar 17 18:48:48.099575 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:48:48.106993 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:48:48.107136 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:48:48.112340 systemd[1]: Startup finished in 727ms (kernel) + 12.148s (initrd) + 19.760s (userspace) = 32.636s. Mar 17 18:48:48.224289 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:48:48.744085 login[1556]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Mar 17 18:48:48.744760 login[1555]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:48:48.791356 systemd[1]: Created slice user-500.slice. Mar 17 18:48:48.792459 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:48:48.796311 systemd-logind[1442]: New session 2 of user core. Mar 17 18:48:48.814212 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:48:48.815562 systemd[1]: Starting user@500.service... Mar 17 18:48:48.831310 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:48:49.010406 systemd[1560]: Queued start job for default target default.target. Mar 17 18:48:49.011308 systemd[1560]: Reached target paths.target. Mar 17 18:48:49.011341 systemd[1560]: Reached target sockets.target. Mar 17 18:48:49.011353 systemd[1560]: Reached target timers.target. Mar 17 18:48:49.011364 systemd[1560]: Reached target basic.target. Mar 17 18:48:49.011462 systemd[1]: Started user@500.service. Mar 17 18:48:49.012318 systemd[1]: Started session-2.scope. 
Mar 17 18:48:49.013954 systemd[1560]: Reached target default.target. Mar 17 18:48:49.014941 systemd[1560]: Startup finished in 177ms. Mar 17 18:48:49.745621 login[1556]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:48:49.750060 systemd[1]: Started session-1.scope. Mar 17 18:48:49.751126 systemd-logind[1442]: New session 1 of user core. Mar 17 18:48:53.965141 waagent[1553]: 2025-03-17T18:48:53.965022Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Mar 17 18:48:53.997163 waagent[1553]: 2025-03-17T18:48:53.997074Z INFO Daemon Daemon OS: flatcar 3510.3.7 Mar 17 18:48:54.002297 waagent[1553]: 2025-03-17T18:48:54.002234Z INFO Daemon Daemon Python: 3.9.16 Mar 17 18:48:54.008004 waagent[1553]: 2025-03-17T18:48:54.007932Z INFO Daemon Daemon Run daemon Mar 17 18:48:54.012976 waagent[1553]: 2025-03-17T18:48:54.012904Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' Mar 17 18:48:54.030199 waagent[1553]: 2025-03-17T18:48:54.030075Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Mar 17 18:48:54.046032 waagent[1553]: 2025-03-17T18:48:54.045905Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:48:54.058082 waagent[1553]: 2025-03-17T18:48:54.058012Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:48:54.063597 waagent[1553]: 2025-03-17T18:48:54.063528Z INFO Daemon Daemon Using waagent for provisioning Mar 17 18:48:54.069699 waagent[1553]: 2025-03-17T18:48:54.069639Z INFO Daemon Daemon Activate resource disk Mar 17 18:48:54.075027 waagent[1553]: 2025-03-17T18:48:54.074959Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 18:48:54.090074 waagent[1553]: 2025-03-17T18:48:54.090013Z INFO Daemon Daemon Found device: None Mar 17 18:48:54.095057 waagent[1553]: 2025-03-17T18:48:54.094995Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 18:48:54.103859 waagent[1553]: 2025-03-17T18:48:54.103801Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 18:48:54.118054 waagent[1553]: 2025-03-17T18:48:54.117992Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:48:54.124484 waagent[1553]: 2025-03-17T18:48:54.124426Z INFO Daemon Daemon Running default provisioning handler Mar 17 18:48:54.137120 waagent[1553]: 2025-03-17T18:48:54.137004Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Mar 17 18:48:54.151730 waagent[1553]: 2025-03-17T18:48:54.151615Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 18:48:54.161660 waagent[1553]: 2025-03-17T18:48:54.161596Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 18:48:54.167530 waagent[1553]: 2025-03-17T18:48:54.167467Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 18:48:54.263488 waagent[1553]: 2025-03-17T18:48:54.263299Z INFO Daemon Daemon Successfully mounted dvd Mar 17 18:48:54.408276 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 18:48:54.450735 waagent[1553]: 2025-03-17T18:48:54.450559Z INFO Daemon Daemon Detect protocol endpoint Mar 17 18:48:54.456224 waagent[1553]: 2025-03-17T18:48:54.456150Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 18:48:54.462276 waagent[1553]: 2025-03-17T18:48:54.462210Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 17 18:48:54.469564 waagent[1553]: 2025-03-17T18:48:54.469503Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 18:48:54.475043 waagent[1553]: 2025-03-17T18:48:54.474983Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 18:48:54.481121 waagent[1553]: 2025-03-17T18:48:54.481060Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 18:48:54.615102 waagent[1553]: 2025-03-17T18:48:54.615032Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 18:48:54.623090 waagent[1553]: 2025-03-17T18:48:54.623046Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 18:48:54.629131 waagent[1553]: 2025-03-17T18:48:54.629071Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 18:48:56.026298 waagent[1553]: 2025-03-17T18:48:56.026141Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 18:48:56.043062 waagent[1553]: 2025-03-17T18:48:56.042980Z INFO Daemon Daemon Forcing an update of the goal state.. 
Mar 17 18:48:56.049043 waagent[1553]: 2025-03-17T18:48:56.048967Z INFO Daemon Daemon Fetching goal state [incarnation 1] Mar 17 18:48:56.145116 waagent[1553]: 2025-03-17T18:48:56.144980Z INFO Daemon Daemon Found private key matching thumbprint 4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07 Mar 17 18:48:56.153613 waagent[1553]: 2025-03-17T18:48:56.153516Z INFO Daemon Daemon Certificate with thumbprint 252FD62682B7ADCB1977719C51EAE11DBE1D43BE has no matching private key. Mar 17 18:48:56.163172 waagent[1553]: 2025-03-17T18:48:56.163082Z INFO Daemon Daemon Fetch goal state completed Mar 17 18:48:56.234158 waagent[1553]: 2025-03-17T18:48:56.234098Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 7d2904e1-22c2-4509-bf10-534cfc81072e New eTag: 15221520380855474663] Mar 17 18:48:56.245887 waagent[1553]: 2025-03-17T18:48:56.245795Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Mar 17 18:48:56.262080 waagent[1553]: 2025-03-17T18:48:56.262013Z INFO Daemon Daemon Starting provisioning Mar 17 18:48:56.267330 waagent[1553]: 2025-03-17T18:48:56.267256Z INFO Daemon Daemon Handle ovf-env.xml. Mar 17 18:48:56.273337 waagent[1553]: 2025-03-17T18:48:56.273268Z INFO Daemon Daemon Set hostname [ci-3510.3.7-a-ffee15dd16] Mar 17 18:48:56.308357 waagent[1553]: 2025-03-17T18:48:56.308230Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-a-ffee15dd16] Mar 17 18:48:56.314822 waagent[1553]: 2025-03-17T18:48:56.314752Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 18:48:56.321375 waagent[1553]: 2025-03-17T18:48:56.321316Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 18:48:56.337686 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Mar 17 18:48:56.337871 systemd[1]: Stopped systemd-networkd-wait-online.service. Mar 17 18:48:56.337934 systemd[1]: Stopping systemd-networkd-wait-online.service... Mar 17 18:48:56.338186 systemd[1]: Stopping systemd-networkd.service... 
Mar 17 18:48:56.343644 systemd-networkd[1221]: eth0: DHCPv6 lease lost Mar 17 18:48:56.345434 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:48:56.345624 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:48:56.347627 systemd[1]: Starting systemd-networkd.service... Mar 17 18:48:56.375624 systemd-networkd[1607]: enP17641s1: Link UP Mar 17 18:48:56.375634 systemd-networkd[1607]: enP17641s1: Gained carrier Mar 17 18:48:56.376472 systemd-networkd[1607]: eth0: Link UP Mar 17 18:48:56.376483 systemd-networkd[1607]: eth0: Gained carrier Mar 17 18:48:56.376805 systemd-networkd[1607]: lo: Link UP Mar 17 18:48:56.376815 systemd-networkd[1607]: lo: Gained carrier Mar 17 18:48:56.377041 systemd-networkd[1607]: eth0: Gained IPv6LL Mar 17 18:48:56.377455 systemd-networkd[1607]: Enumeration completed Mar 17 18:48:56.377553 systemd[1]: Started systemd-networkd.service. Mar 17 18:48:56.379154 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:48:56.380167 systemd-networkd[1607]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:48:56.386506 waagent[1553]: 2025-03-17T18:48:56.386358Z INFO Daemon Daemon Create user account if not exists Mar 17 18:48:56.393160 waagent[1553]: 2025-03-17T18:48:56.393085Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 18:48:56.399859 waagent[1553]: 2025-03-17T18:48:56.399790Z INFO Daemon Daemon Configure sudoer Mar 17 18:48:56.406687 systemd-networkd[1607]: eth0: DHCPv4 address 10.200.20.36/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:48:56.409642 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:48:56.411135 waagent[1553]: 2025-03-17T18:48:56.411042Z INFO Daemon Daemon Configure sshd Mar 17 18:48:56.416142 waagent[1553]: 2025-03-17T18:48:56.416061Z INFO Daemon Daemon Deploy ssh public key. 
Mar 17 18:48:57.770094 waagent[1553]: 2025-03-17T18:48:57.770017Z INFO Daemon Daemon Provisioning complete Mar 17 18:48:57.775040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:48:57.775200 systemd[1]: Stopped kubelet.service. Mar 17 18:48:57.776598 systemd[1]: Starting kubelet.service... Mar 17 18:48:57.792793 waagent[1553]: 2025-03-17T18:48:57.792722Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 18:48:57.800091 waagent[1553]: 2025-03-17T18:48:57.799995Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 17 18:48:57.811788 waagent[1553]: 2025-03-17T18:48:57.811690Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Mar 17 18:48:57.859932 systemd[1]: Started kubelet.service. Mar 17 18:48:57.968636 kubelet[1621]: E0317 18:48:57.968599 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:57.971557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:57.971708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:58.118751 waagent[1618]: 2025-03-17T18:48:58.118645Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Mar 17 18:48:58.119557 waagent[1618]: 2025-03-17T18:48:58.119491Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:48:58.119713 waagent[1618]: 2025-03-17T18:48:58.119665Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:48:58.132231 waagent[1618]: 2025-03-17T18:48:58.132157Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Mar 17 18:48:58.132408 waagent[1618]: 2025-03-17T18:48:58.132358Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Mar 17 18:48:58.198565 waagent[1618]: 2025-03-17T18:48:58.198426Z INFO ExtHandler ExtHandler Found private key matching thumbprint 4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07 Mar 17 18:48:58.198790 waagent[1618]: 2025-03-17T18:48:58.198735Z INFO ExtHandler ExtHandler Certificate with thumbprint 252FD62682B7ADCB1977719C51EAE11DBE1D43BE has no matching private key. Mar 17 18:48:58.199016 waagent[1618]: 2025-03-17T18:48:58.198967Z INFO ExtHandler ExtHandler Fetch goal state completed Mar 17 18:48:58.212576 waagent[1618]: 2025-03-17T18:48:58.212520Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: d11e4d7f-4403-414a-907a-0b01d5ae7eb5 New eTag: 15221520380855474663] Mar 17 18:48:58.213110 waagent[1618]: 2025-03-17T18:48:58.213049Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Mar 17 18:48:58.266777 waagent[1618]: 2025-03-17T18:48:58.266634Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 18:48:58.276809 waagent[1618]: 2025-03-17T18:48:58.276732Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1618 Mar 17 18:48:58.280493 waagent[1618]: 2025-03-17T18:48:58.280428Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 18:48:58.281844 waagent[1618]: 2025-03-17T18:48:58.281788Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 18:48:58.437990 waagent[1618]: 2025-03-17T18:48:58.437874Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 18:48:58.438346 waagent[1618]: 2025-03-17T18:48:58.438284Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 18:48:58.446440 waagent[1618]: 2025-03-17T18:48:58.446375Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 18:48:58.446947 waagent[1618]: 2025-03-17T18:48:58.446890Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Mar 17 18:48:58.448087 waagent[1618]: 2025-03-17T18:48:58.448021Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Mar 17 18:48:58.449413 waagent[1618]: 2025-03-17T18:48:58.449342Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 18:48:58.450033 waagent[1618]: 2025-03-17T18:48:58.449972Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:48:58.450302 waagent[1618]: 2025-03-17T18:48:58.450251Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:48:58.451059 waagent[1618]: 2025-03-17T18:48:58.450990Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 17 18:48:58.451481 waagent[1618]: 2025-03-17T18:48:58.451423Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 18:48:58.451481 waagent[1618]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 18:48:58.451481 waagent[1618]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 18:48:58.451481 waagent[1618]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 18:48:58.451481 waagent[1618]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:48:58.451481 waagent[1618]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:48:58.451481 waagent[1618]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:48:58.453787 waagent[1618]: 2025-03-17T18:48:58.453628Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 18:48:58.454700 waagent[1618]: 2025-03-17T18:48:58.454633Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:48:58.454993 waagent[1618]: 2025-03-17T18:48:58.454937Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:48:58.455661 waagent[1618]: 2025-03-17T18:48:58.455571Z INFO EnvHandler ExtHandler Configure routes Mar 17 18:48:58.455906 waagent[1618]: 2025-03-17T18:48:58.455856Z INFO EnvHandler ExtHandler Gateway:None Mar 17 18:48:58.456108 waagent[1618]: 2025-03-17T18:48:58.456063Z INFO EnvHandler ExtHandler Routes:None Mar 17 18:48:58.456755 waagent[1618]: 2025-03-17T18:48:58.456679Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 18:48:58.457083 waagent[1618]: 2025-03-17T18:48:58.457024Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 18:48:58.457983 waagent[1618]: 2025-03-17T18:48:58.457911Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 18:48:58.458083 waagent[1618]: 2025-03-17T18:48:58.458019Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Mar 17 18:48:58.458693 waagent[1618]: 2025-03-17T18:48:58.458617Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 18:48:58.469841 waagent[1618]: 2025-03-17T18:48:58.469771Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Mar 17 18:48:58.470455 waagent[1618]: 2025-03-17T18:48:58.470399Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Mar 17 18:48:58.471514 waagent[1618]: 2025-03-17T18:48:58.471450Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Mar 17 18:48:58.506773 waagent[1618]: 2025-03-17T18:48:58.506632Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1607' Mar 17 18:48:58.518168 waagent[1618]: 2025-03-17T18:48:58.518104Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Mar 17 18:48:58.596052 waagent[1618]: 2025-03-17T18:48:58.595928Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 18:48:58.596052 waagent[1618]: Executing ['ip', '-a', '-o', 'link']: Mar 17 18:48:58.596052 waagent[1618]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 18:48:58.596052 waagent[1618]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:37:80 brd ff:ff:ff:ff:ff:ff Mar 17 18:48:58.596052 waagent[1618]: 3: enP17641s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:37:80 brd ff:ff:ff:ff:ff:ff\ altname enP17641p0s2 Mar 17 18:48:58.596052 waagent[1618]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 18:48:58.596052 waagent[1618]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 18:48:58.596052 waagent[1618]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 18:48:58.596052 waagent[1618]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 18:48:58.596052 waagent[1618]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Mar 17 18:48:58.596052 waagent[1618]: 2: eth0 inet6 fe80::20d:3aff:fe07:3780/64 scope link \ valid_lft forever preferred_lft forever Mar 17 18:48:58.730925 waagent[1618]: 2025-03-17T18:48:58.730821Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Mar 17 18:48:58.815712 waagent[1553]: 2025-03-17T18:48:58.815577Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Mar 17 18:48:58.820541 waagent[1553]: 2025-03-17T18:48:58.820492Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Mar 17 
18:49:00.051916 waagent[1656]: 2025-03-17T18:49:00.051817Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Mar 17 18:49:00.052646 waagent[1656]: 2025-03-17T18:49:00.052550Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 Mar 17 18:49:00.052795 waagent[1656]: 2025-03-17T18:49:00.052745Z INFO ExtHandler ExtHandler Python: 3.9.16 Mar 17 18:49:00.052922 waagent[1656]: 2025-03-17T18:49:00.052878Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Mar 17 18:49:00.060647 waagent[1656]: 2025-03-17T18:49:00.060509Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 18:49:00.061033 waagent[1656]: 2025-03-17T18:49:00.060972Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:49:00.061180 waagent[1656]: 2025-03-17T18:49:00.061135Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:49:00.073860 waagent[1656]: 2025-03-17T18:49:00.073789Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 18:49:00.082764 waagent[1656]: 2025-03-17T18:49:00.082708Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 17 18:49:00.083772 waagent[1656]: 2025-03-17T18:49:00.083713Z INFO ExtHandler Mar 17 18:49:00.083920 waagent[1656]: 2025-03-17T18:49:00.083872Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b181c113-62e6-456f-86ff-bf11f221efc0 eTag: 15221520380855474663 source: Fabric] Mar 17 18:49:00.084658 waagent[1656]: 2025-03-17T18:49:00.084572Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 17 18:49:00.085889 waagent[1656]: 2025-03-17T18:49:00.085826Z INFO ExtHandler Mar 17 18:49:00.086032 waagent[1656]: 2025-03-17T18:49:00.085976Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 18:49:00.094280 waagent[1656]: 2025-03-17T18:49:00.094229Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 18:49:00.094767 waagent[1656]: 2025-03-17T18:49:00.094711Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Mar 17 18:49:00.120303 waagent[1656]: 2025-03-17T18:49:00.120246Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Mar 17 18:49:00.190695 waagent[1656]: 2025-03-17T18:49:00.190521Z INFO ExtHandler Downloaded certificate {'thumbprint': '4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07', 'hasPrivateKey': True} Mar 17 18:49:00.191763 waagent[1656]: 2025-03-17T18:49:00.191686Z INFO ExtHandler Downloaded certificate {'thumbprint': '252FD62682B7ADCB1977719C51EAE11DBE1D43BE', 'hasPrivateKey': False} Mar 17 18:49:00.192833 waagent[1656]: 2025-03-17T18:49:00.192770Z INFO ExtHandler Fetch goal state completed Mar 17 18:49:00.212960 waagent[1656]: 2025-03-17T18:49:00.212847Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Mar 17 18:49:00.225097 waagent[1656]: 2025-03-17T18:49:00.224988Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1656 Mar 17 18:49:00.228375 waagent[1656]: 2025-03-17T18:49:00.228301Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 18:49:00.229493 waagent[1656]: 2025-03-17T18:49:00.229433Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Mar 17 18:49:00.229818 waagent[1656]: 2025-03-17T18:49:00.229759Z INFO ExtHandler ExtHandler [CGI] Agent 
cgroups enabled: False Mar 17 18:49:00.231926 waagent[1656]: 2025-03-17T18:49:00.231865Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 18:49:00.236792 waagent[1656]: 2025-03-17T18:49:00.236728Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 18:49:00.237215 waagent[1656]: 2025-03-17T18:49:00.237144Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 18:49:00.245345 waagent[1656]: 2025-03-17T18:49:00.245283Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 17 18:49:00.245854 waagent[1656]: 2025-03-17T18:49:00.245797Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Mar 17 18:49:00.251601 waagent[1656]: 2025-03-17T18:49:00.251478Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 17 18:49:00.252682 waagent[1656]: 2025-03-17T18:49:00.252611Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Mar 17 18:49:00.254241 waagent[1656]: 2025-03-17T18:49:00.254166Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 17 18:49:00.254791 waagent[1656]: 2025-03-17T18:49:00.254728Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:49:00.255068 waagent[1656]: 2025-03-17T18:49:00.255016Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:49:00.255796 waagent[1656]: 2025-03-17T18:49:00.255726Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 17 18:49:00.256477 waagent[1656]: 2025-03-17T18:49:00.256404Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 18:49:00.257073 waagent[1656]: 2025-03-17T18:49:00.257003Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:49:00.257229 waagent[1656]: 2025-03-17T18:49:00.257170Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 18:49:00.257229 waagent[1656]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 18:49:00.257229 waagent[1656]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 18:49:00.257229 waagent[1656]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 18:49:00.257229 waagent[1656]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:49:00.257229 waagent[1656]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:49:00.257229 waagent[1656]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:49:00.257983 waagent[1656]: 2025-03-17T18:49:00.257726Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 18:49:00.258050 waagent[1656]: 2025-03-17T18:49:00.257987Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
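The routing table the agent dumps above comes from /proc/net/route, where Destination, Gateway, and Mask are little-endian hexadecimal IPv4 addresses. A minimal sketch of decoding those fields (the function name is ours, not waagent's):

```python
import socket
import struct

def decode_route_addr(hex_addr: str) -> str:
    """Convert a little-endian hex field from /proc/net/route to dotted-quad form."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

# Entries from the dump above:
# gateway 0114C80A is the default gateway 10.200.20.1
# destination 10813FA8 is the Azure WireServer, 168.63.129.16
# destination FEA9FEA9 is the instance metadata service, 169.254.169.254
```

Decoding the host routes this way shows why the agent installs them: 10813FA8 and FEA9FEA9 are the WireServer and IMDS endpoints that appear throughout the agent log.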
Mar 17 18:49:00.259848 waagent[1656]: 2025-03-17T18:49:00.259670Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:49:00.261225 waagent[1656]: 2025-03-17T18:49:00.261148Z INFO EnvHandler ExtHandler Configure routes Mar 17 18:49:00.261541 waagent[1656]: 2025-03-17T18:49:00.261470Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 18:49:00.261855 waagent[1656]: 2025-03-17T18:49:00.261782Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 17 18:49:00.262472 waagent[1656]: 2025-03-17T18:49:00.262396Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 18:49:00.262905 waagent[1656]: 2025-03-17T18:49:00.262835Z INFO EnvHandler ExtHandler Gateway:None Mar 17 18:49:00.264286 waagent[1656]: 2025-03-17T18:49:00.264217Z INFO EnvHandler ExtHandler Routes:None Mar 17 18:49:00.278250 waagent[1656]: 2025-03-17T18:49:00.278188Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 18:49:00.278250 waagent[1656]: Executing ['ip', '-a', '-o', 'link']: Mar 17 18:49:00.278250 waagent[1656]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 18:49:00.278250 waagent[1656]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:37:80 brd ff:ff:ff:ff:ff:ff Mar 17 18:49:00.278250 waagent[1656]: 3: enP17641s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:37:80 brd ff:ff:ff:ff:ff:ff\ altname enP17641p0s2 Mar 17 18:49:00.278250 waagent[1656]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 18:49:00.278250 waagent[1656]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 18:49:00.278250 waagent[1656]: 2: eth0 inet 10.200.20.36/24 metric 1024 brd 10.200.20.255 
scope global eth0\ valid_lft forever preferred_lft forever Mar 17 18:49:00.278250 waagent[1656]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 18:49:00.278250 waagent[1656]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Mar 17 18:49:00.278250 waagent[1656]: 2: eth0 inet6 fe80::20d:3aff:fe07:3780/64 scope link \ valid_lft forever preferred_lft forever Mar 17 18:49:00.287634 waagent[1656]: 2025-03-17T18:49:00.287517Z INFO ExtHandler ExtHandler Downloading agent manifest Mar 17 18:49:00.320039 waagent[1656]: 2025-03-17T18:49:00.319918Z INFO ExtHandler ExtHandler Mar 17 18:49:00.320134 waagent[1656]: 2025-03-17T18:49:00.320084Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b51b55ae-a3fe-491f-ba16-840426d21e95 correlation 47dcf822-fa87-47a8-8bd0-c769199f2242 created: 2025-03-17T18:47:31.721764Z] Mar 17 18:49:00.321000 waagent[1656]: 2025-03-17T18:49:00.320936Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 18:49:00.322936 waagent[1656]: 2025-03-17T18:49:00.322875Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 17 18:49:00.354915 waagent[1656]: 2025-03-17T18:49:00.354840Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Mar 17 18:49:00.373343 waagent[1656]: 2025-03-17T18:49:00.373259Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 152C46E4-EB91-4C1D-B6DB-8CC42CAC4711;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Mar 17 18:49:00.508301 waagent[1656]: 2025-03-17T18:49:00.508165Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 17 18:49:00.508301 waagent[1656]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:49:00.508301 waagent[1656]: pkts bytes target prot opt in out source destination Mar 17 18:49:00.508301 waagent[1656]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:49:00.508301 waagent[1656]: pkts bytes target prot opt in out source destination Mar 17 18:49:00.508301 waagent[1656]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:49:00.508301 waagent[1656]: pkts bytes target prot opt in out source destination Mar 17 18:49:00.508301 waagent[1656]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:49:00.508301 waagent[1656]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:49:00.508301 waagent[1656]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:49:00.516567 waagent[1656]: 2025-03-17T18:49:00.516461Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 18:49:00.516567 waagent[1656]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:49:00.516567 waagent[1656]: pkts bytes target prot opt in out source destination Mar 17 18:49:00.516567 waagent[1656]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:49:00.516567 waagent[1656]: pkts bytes target prot opt in out source destination Mar 17 18:49:00.516567 waagent[1656]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:49:00.516567 waagent[1656]: pkts bytes target prot opt in out source destination Mar 17 18:49:00.516567 waagent[1656]: 0 0 ACCEPT tcp -- * * 
0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:49:00.516567 waagent[1656]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:49:00.516567 waagent[1656]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:49:00.517341 waagent[1656]: 2025-03-17T18:49:00.517292Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 18:49:08.024769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:49:08.024939 systemd[1]: Stopped kubelet.service. Mar 17 18:49:08.026264 systemd[1]: Starting kubelet.service... Mar 17 18:49:08.104885 systemd[1]: Started kubelet.service. Mar 17 18:49:08.152807 kubelet[1714]: E0317 18:49:08.152769 1714 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:08.154919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:08.155051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:18.274898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 18:49:18.275073 systemd[1]: Stopped kubelet.service. Mar 17 18:49:18.276450 systemd[1]: Starting kubelet.service... Mar 17 18:49:18.552664 systemd[1]: Started kubelet.service. 
Mar 17 18:49:18.589362 kubelet[1724]: E0317 18:49:18.589323 1724 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:18.591454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:18.591597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:26.598322 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 17 18:49:28.774813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 18:49:28.774984 systemd[1]: Stopped kubelet.service. Mar 17 18:49:28.776341 systemd[1]: Starting kubelet.service... Mar 17 18:49:29.059121 systemd[1]: Started kubelet.service. Mar 17 18:49:29.098667 kubelet[1735]: E0317 18:49:29.098615 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:29.101030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:29.101150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:32.512296 update_engine[1444]: I0317 18:49:32.512240 1444 update_attempter.cc:509] Updating boot flags... Mar 17 18:49:38.364169 systemd[1]: Created slice system-sshd.slice. Mar 17 18:49:38.365719 systemd[1]: Started sshd@0-10.200.20.36:22-10.200.16.10:52418.service. 
Mar 17 18:49:38.984803 sshd[1781]: Accepted publickey for core from 10.200.16.10 port 52418 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:39.022711 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:39.026594 systemd-logind[1442]: New session 3 of user core. Mar 17 18:49:39.027018 systemd[1]: Started session-3.scope. Mar 17 18:49:39.274800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 18:49:39.274962 systemd[1]: Stopped kubelet.service. Mar 17 18:49:39.276354 systemd[1]: Starting kubelet.service... Mar 17 18:49:39.383448 systemd[1]: Started sshd@1-10.200.20.36:22-10.200.16.10:52424.service. Mar 17 18:49:39.437746 systemd[1]: Started kubelet.service. Mar 17 18:49:39.479087 kubelet[1795]: E0317 18:49:39.479037 1795 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:39.481256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:39.481379 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:39.823237 sshd[1791]: Accepted publickey for core from 10.200.16.10 port 52424 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:39.824503 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:39.828780 systemd[1]: Started session-4.scope. Mar 17 18:49:39.829104 systemd-logind[1442]: New session 4 of user core. Mar 17 18:49:40.154741 sshd[1791]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:40.157321 systemd[1]: sshd@1-10.200.20.36:22-10.200.16.10:52424.service: Deactivated successfully. 
Mar 17 18:49:40.158022 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:49:40.158555 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:49:40.159507 systemd-logind[1442]: Removed session 4. Mar 17 18:49:40.233162 systemd[1]: Started sshd@2-10.200.20.36:22-10.200.16.10:52436.service. Mar 17 18:49:40.674785 sshd[1805]: Accepted publickey for core from 10.200.16.10 port 52436 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:40.676343 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:40.680466 systemd[1]: Started session-5.scope. Mar 17 18:49:40.681021 systemd-logind[1442]: New session 5 of user core. Mar 17 18:49:40.993941 sshd[1805]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:40.996366 systemd[1]: sshd@2-10.200.20.36:22-10.200.16.10:52436.service: Deactivated successfully. Mar 17 18:49:40.997042 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:49:40.997536 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:49:40.998333 systemd-logind[1442]: Removed session 5. Mar 17 18:49:41.082710 systemd[1]: Started sshd@3-10.200.20.36:22-10.200.16.10:52448.service. Mar 17 18:49:41.561911 sshd[1811]: Accepted publickey for core from 10.200.16.10 port 52448 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:41.563471 sshd[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:41.567151 systemd-logind[1442]: New session 6 of user core. Mar 17 18:49:41.567524 systemd[1]: Started session-6.scope. Mar 17 18:49:41.911958 sshd[1811]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:41.914619 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:49:41.915058 systemd[1]: sshd@3-10.200.20.36:22-10.200.16.10:52448.service: Deactivated successfully. 
Mar 17 18:49:41.915719 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:49:41.916398 systemd-logind[1442]: Removed session 6. Mar 17 18:49:41.994647 systemd[1]: Started sshd@4-10.200.20.36:22-10.200.16.10:52454.service. Mar 17 18:49:42.479057 sshd[1817]: Accepted publickey for core from 10.200.16.10 port 52454 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:42.480288 sshd[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:42.484108 systemd-logind[1442]: New session 7 of user core. Mar 17 18:49:42.484514 systemd[1]: Started session-7.scope. Mar 17 18:49:43.034944 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:49:43.035152 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:49:43.055240 systemd[1]: Starting docker.service... Mar 17 18:49:43.085896 env[1830]: time="2025-03-17T18:49:43.085853052Z" level=info msg="Starting up" Mar 17 18:49:43.087207 env[1830]: time="2025-03-17T18:49:43.087176245Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:49:43.087207 env[1830]: time="2025-03-17T18:49:43.087200325Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:49:43.087311 env[1830]: time="2025-03-17T18:49:43.087219525Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:49:43.087311 env[1830]: time="2025-03-17T18:49:43.087229285Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:49:43.088951 env[1830]: time="2025-03-17T18:49:43.088931595Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:49:43.089041 env[1830]: time="2025-03-17T18:49:43.089025715Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:49:43.089101 env[1830]: 
time="2025-03-17T18:49:43.089085714Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:49:43.089157 env[1830]: time="2025-03-17T18:49:43.089145594Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:49:43.097029 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2422234815-merged.mount: Deactivated successfully. Mar 17 18:49:43.196884 env[1830]: time="2025-03-17T18:49:43.196843985Z" level=info msg="Loading containers: start." Mar 17 18:49:43.404619 kernel: Initializing XFRM netlink socket Mar 17 18:49:43.424070 env[1830]: time="2025-03-17T18:49:43.424023661Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:49:43.596314 systemd-networkd[1607]: docker0: Link UP Mar 17 18:49:43.631634 env[1830]: time="2025-03-17T18:49:43.631600927Z" level=info msg="Loading containers: done." Mar 17 18:49:43.659194 env[1830]: time="2025-03-17T18:49:43.659158611Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:49:43.659515 env[1830]: time="2025-03-17T18:49:43.659497769Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:49:43.659704 env[1830]: time="2025-03-17T18:49:43.659688928Z" level=info msg="Daemon has completed initialization" Mar 17 18:49:43.700462 systemd[1]: Started docker.service. 
Mar 17 18:49:43.706889 env[1830]: time="2025-03-17T18:49:43.706726542Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:49:48.486693 env[1455]: time="2025-03-17T18:49:48.486383533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:49:49.524798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 18:49:49.524963 systemd[1]: Stopped kubelet.service. Mar 17 18:49:49.526460 systemd[1]: Starting kubelet.service... Mar 17 18:49:49.609475 systemd[1]: Started kubelet.service. Mar 17 18:49:49.678872 kubelet[1954]: E0317 18:49:49.678832 1954 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:49.680925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:49.681049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:50.004229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975651278.mount: Deactivated successfully. 
Mar 17 18:49:52.136499 env[1455]: time="2025-03-17T18:49:52.136430868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.145933 env[1455]: time="2025-03-17T18:49:52.145898110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.151350 env[1455]: time="2025-03-17T18:49:52.151314517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.157656 env[1455]: time="2025-03-17T18:49:52.157624758Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.158511 env[1455]: time="2025-03-17T18:49:52.158481550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 18:49:52.167374 env[1455]: time="2025-03-17T18:49:52.167350249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:49:54.779100 env[1455]: time="2025-03-17T18:49:54.779041013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.787754 env[1455]: time="2025-03-17T18:49:54.787715366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.793309 env[1455]: time="2025-03-17T18:49:54.793270487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.801714 env[1455]: time="2025-03-17T18:49:54.801659990Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.802558 env[1455]: time="2025-03-17T18:49:54.802531742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 18:49:54.811537 env[1455]: time="2025-03-17T18:49:54.811501186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:49:56.421835 env[1455]: time="2025-03-17T18:49:56.421785447Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.429649 env[1455]: time="2025-03-17T18:49:56.429619155Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.436717 env[1455]: time="2025-03-17T18:49:56.436688157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.441413 env[1455]: time="2025-03-17T18:49:56.441385198Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.442183 env[1455]: time="2025-03-17T18:49:56.442154944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 18:49:56.450947 env[1455]: time="2025-03-17T18:49:56.450920684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:49:57.620855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397105998.mount: Deactivated successfully. Mar 17 18:49:58.124885 env[1455]: time="2025-03-17T18:49:58.124811713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.135500 env[1455]: time="2025-03-17T18:49:58.135455458Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.141921 env[1455]: time="2025-03-17T18:49:58.141896907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.147669 env[1455]: time="2025-03-17T18:49:58.147620933Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.148178 env[1455]: time="2025-03-17T18:49:58.148151950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference 
\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 18:49:58.156627 env[1455]: time="2025-03-17T18:49:58.156595504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:49:58.882786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692549980.mount: Deactivated successfully. Mar 17 18:49:59.774741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 18:49:59.774907 systemd[1]: Stopped kubelet.service. Mar 17 18:49:59.776267 systemd[1]: Starting kubelet.service... Mar 17 18:50:00.228568 systemd[1]: Started kubelet.service. Mar 17 18:50:00.269879 kubelet[1985]: E0317 18:50:00.269824 1985 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:00.271998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:00.272116 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:50:01.016761 env[1455]: time="2025-03-17T18:50:01.016713508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.028440 env[1455]: time="2025-03-17T18:50:01.028401298Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.042110 env[1455]: time="2025-03-17T18:50:01.042072147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.048765 env[1455]: time="2025-03-17T18:50:01.048730546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.049416 env[1455]: time="2025-03-17T18:50:01.049383686Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:50:01.058504 env[1455]: time="2025-03-17T18:50:01.058459718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:50:01.769193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605248374.mount: Deactivated successfully. 
Mar 17 18:50:01.800219 env[1455]: time="2025-03-17T18:50:01.800173401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.810915 env[1455]: time="2025-03-17T18:50:01.810875121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.816421 env[1455]: time="2025-03-17T18:50:01.816380886Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.821136 env[1455]: time="2025-03-17T18:50:01.821094387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.821715 env[1455]: time="2025-03-17T18:50:01.821687405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 18:50:01.830415 env[1455]: time="2025-03-17T18:50:01.830383545Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:50:02.476162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923423842.mount: Deactivated successfully. 
Mar 17 18:50:06.650018 env[1455]: time="2025-03-17T18:50:06.649956961Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.660658 env[1455]: time="2025-03-17T18:50:06.659953864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.664280 env[1455]: time="2025-03-17T18:50:06.664244217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.670200 env[1455]: time="2025-03-17T18:50:06.670164092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.671109 env[1455]: time="2025-03-17T18:50:06.671078156Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 18:50:10.274787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 18:50:10.274967 systemd[1]: Stopped kubelet.service. Mar 17 18:50:10.276308 systemd[1]: Starting kubelet.service... Mar 17 18:50:10.511743 systemd[1]: Started kubelet.service. 
Mar 17 18:50:10.566082 kubelet[2065]: E0317 18:50:10.565983 2065 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:10.567871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:10.567995 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:12.886044 systemd[1]: Stopped kubelet.service. Mar 17 18:50:12.888717 systemd[1]: Starting kubelet.service... Mar 17 18:50:12.913148 systemd[1]: Reloading. Mar 17 18:50:13.010520 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2025-03-17T18:50:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:50:13.010549 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2025-03-17T18:50:13Z" level=info msg="torcx already run" Mar 17 18:50:13.065505 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:50:13.065525 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:50:13.080994 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:50:13.167168 systemd[1]: Started kubelet.service. Mar 17 18:50:13.168859 systemd[1]: Stopping kubelet.service... 
Mar 17 18:50:13.169195 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:50:13.169372 systemd[1]: Stopped kubelet.service. Mar 17 18:50:13.171368 systemd[1]: Starting kubelet.service... Mar 17 18:50:13.322515 systemd[1]: Started kubelet.service. Mar 17 18:50:13.362923 kubelet[2163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:13.363249 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:50:13.363298 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:50:13.363428 kubelet[2163]: I0317 18:50:13.363396 2163 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:50:13.901659 kubelet[2163]: I0317 18:50:13.901623 2163 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:50:13.901659 kubelet[2163]: I0317 18:50:13.901651 2163 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:50:13.901861 kubelet[2163]: I0317 18:50:13.901841 2163 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:50:13.913412 kubelet[2163]: E0317 18:50:13.913390 2163 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:13.913808 kubelet[2163]: I0317 18:50:13.913789 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:13.927860 kubelet[2163]: I0317 18:50:13.927826 2163 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:50:13.929533 kubelet[2163]: I0317 18:50:13.929496 2163 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:50:13.929723 kubelet[2163]: I0317 18:50:13.929534 2163 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-ffee15dd16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:50:13.929816 kubelet[2163]: I0317 18:50:13.929737 2163 topology_manager.go:138] "Creating topology manager with none policy" Mar 
17 18:50:13.929816 kubelet[2163]: I0317 18:50:13.929747 2163 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:50:13.929889 kubelet[2163]: I0317 18:50:13.929870 2163 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:13.930722 kubelet[2163]: I0317 18:50:13.930701 2163 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:50:13.930767 kubelet[2163]: I0317 18:50:13.930727 2163 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:50:13.930922 kubelet[2163]: I0317 18:50:13.930909 2163 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:50:13.930959 kubelet[2163]: I0317 18:50:13.930935 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:50:13.932934 kubelet[2163]: I0317 18:50:13.932912 2163 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:50:13.933111 kubelet[2163]: I0317 18:50:13.933092 2163 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:50:13.933144 kubelet[2163]: W0317 18:50:13.933134 2163 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 18:50:13.933619 kubelet[2163]: I0317 18:50:13.933599 2163 server.go:1264] "Started kubelet" Mar 17 18:50:13.933766 kubelet[2163]: W0317 18:50:13.933723 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-ffee15dd16&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:13.933810 kubelet[2163]: E0317 18:50:13.933773 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-ffee15dd16&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:13.940473 kubelet[2163]: W0317 18:50:13.940430 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:13.940615 kubelet[2163]: E0317 18:50:13.940601 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:13.941523 kubelet[2163]: E0317 18:50:13.941424 2163 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.36:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-a-ffee15dd16.182dabb27be7b7e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-ffee15dd16,UID:ci-3510.3.7-a-ffee15dd16,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-ffee15dd16,},FirstTimestamp:2025-03-17 18:50:13.933561831 +0000 UTC m=+0.606993859,LastTimestamp:2025-03-17 18:50:13.933561831 +0000 UTC m=+0.606993859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-ffee15dd16,}" Mar 17 18:50:13.941816 kubelet[2163]: I0317 18:50:13.941773 2163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:50:13.942122 kubelet[2163]: I0317 18:50:13.942102 2163 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:50:13.942257 kubelet[2163]: I0317 18:50:13.942235 2163 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:50:13.943150 kubelet[2163]: I0317 18:50:13.943127 2163 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:50:13.946038 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 18:50:13.947267 kubelet[2163]: I0317 18:50:13.947245 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:50:13.947685 kubelet[2163]: E0317 18:50:13.947667 2163 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:50:13.948912 kubelet[2163]: E0317 18:50:13.948890 2163 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.7-a-ffee15dd16\" not found" Mar 17 18:50:13.949274 kubelet[2163]: I0317 18:50:13.949250 2163 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:50:13.949437 kubelet[2163]: I0317 18:50:13.949423 2163 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:50:13.950440 kubelet[2163]: I0317 18:50:13.950420 2163 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:50:13.950820 kubelet[2163]: W0317 18:50:13.950783 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:13.950931 kubelet[2163]: E0317 18:50:13.950919 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:13.951726 kubelet[2163]: E0317 18:50:13.951698 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-ffee15dd16?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="200ms" Mar 17 18:50:13.952389 kubelet[2163]: I0317 18:50:13.952370 2163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:50:13.953378 kubelet[2163]: I0317 18:50:13.953361 2163 factory.go:221] Registration 
of the containerd container factory successfully Mar 17 18:50:13.953476 kubelet[2163]: I0317 18:50:13.953465 2163 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:50:14.047455 kubelet[2163]: I0317 18:50:14.047399 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:50:14.048381 kubelet[2163]: I0317 18:50:14.048345 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:50:14.048463 kubelet[2163]: I0317 18:50:14.048387 2163 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:50:14.048463 kubelet[2163]: I0317 18:50:14.048408 2163 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:50:14.048463 kubelet[2163]: E0317 18:50:14.048451 2163 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:50:14.050000 kubelet[2163]: W0317 18:50:14.049968 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:14.050092 kubelet[2163]: E0317 18:50:14.050020 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:14.076352 kubelet[2163]: I0317 18:50:14.076333 2163 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:50:14.076468 kubelet[2163]: I0317 18:50:14.076457 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:50:14.076538 kubelet[2163]: I0317 18:50:14.076350 2163 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.076607 kubelet[2163]: I0317 18:50:14.076596 2163 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:14.076893 kubelet[2163]: E0317 18:50:14.076839 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.082396 kubelet[2163]: I0317 18:50:14.082380 2163 policy_none.go:49] "None policy: Start" Mar 17 18:50:14.083144 kubelet[2163]: I0317 18:50:14.083124 2163 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:50:14.083205 kubelet[2163]: I0317 18:50:14.083168 2163 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:50:14.095263 systemd[1]: Created slice kubepods.slice. Mar 17 18:50:14.099757 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:50:14.102521 systemd[1]: Created slice kubepods-besteffort.slice. 
Mar 17 18:50:14.113309 kubelet[2163]: I0317 18:50:14.113283 2163 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:50:14.113468 kubelet[2163]: I0317 18:50:14.113423 2163 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:50:14.113544 kubelet[2163]: I0317 18:50:14.113528 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:50:14.115436 kubelet[2163]: E0317 18:50:14.115411 2163 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-a-ffee15dd16\" not found" Mar 17 18:50:14.149310 kubelet[2163]: I0317 18:50:14.149267 2163 topology_manager.go:215] "Topology Admit Handler" podUID="6d9e81622c18646b8d989ba3a5649990" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.150671 kubelet[2163]: I0317 18:50:14.150643 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e81622c18646b8d989ba3a5649990-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-ffee15dd16\" (UID: \"6d9e81622c18646b8d989ba3a5649990\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.151223 kubelet[2163]: I0317 18:50:14.151207 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e81622c18646b8d989ba3a5649990-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-ffee15dd16\" (UID: \"6d9e81622c18646b8d989ba3a5649990\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.151387 kubelet[2163]: I0317 18:50:14.151370 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6d9e81622c18646b8d989ba3a5649990-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-ffee15dd16\" (UID: \"6d9e81622c18646b8d989ba3a5649990\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.151473 kubelet[2163]: I0317 18:50:14.150890 2163 topology_manager.go:215] "Topology Admit Handler" podUID="91cc94e2703cf659f9d0a14977a9fd14" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.152778 kubelet[2163]: E0317 18:50:14.152689 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-ffee15dd16?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="400ms" Mar 17 18:50:14.154649 kubelet[2163]: I0317 18:50:14.154629 2163 topology_manager.go:215] "Topology Admit Handler" podUID="0000f7e146d197c3fe0a74ed58343655" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.159599 systemd[1]: Created slice kubepods-burstable-pod6d9e81622c18646b8d989ba3a5649990.slice. Mar 17 18:50:14.170285 systemd[1]: Created slice kubepods-burstable-pod91cc94e2703cf659f9d0a14977a9fd14.slice. Mar 17 18:50:14.174257 systemd[1]: Created slice kubepods-burstable-pod0000f7e146d197c3fe0a74ed58343655.slice. 
Mar 17 18:50:14.251898 kubelet[2163]: I0317 18:50:14.251861 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.251898 kubelet[2163]: I0317 18:50:14.251904 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.252043 kubelet[2163]: I0317 18:50:14.251933 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.252043 kubelet[2163]: I0317 18:50:14.251961 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.252043 kubelet[2163]: I0317 18:50:14.251978 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.252043 kubelet[2163]: I0317 18:50:14.251996 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0000f7e146d197c3fe0a74ed58343655-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-ffee15dd16\" (UID: \"0000f7e146d197c3fe0a74ed58343655\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.279330 kubelet[2163]: I0317 18:50:14.279308 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.279744 kubelet[2163]: E0317 18:50:14.279719 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.470739 env[1455]: time="2025-03-17T18:50:14.470633263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-ffee15dd16,Uid:6d9e81622c18646b8d989ba3a5649990,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:14.473805 env[1455]: time="2025-03-17T18:50:14.473700649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-ffee15dd16,Uid:91cc94e2703cf659f9d0a14977a9fd14,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:14.476660 env[1455]: time="2025-03-17T18:50:14.476618711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-ffee15dd16,Uid:0000f7e146d197c3fe0a74ed58343655,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:14.553477 kubelet[2163]: E0317 18:50:14.553442 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-ffee15dd16?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="800ms" Mar 17 18:50:14.681865 kubelet[2163]: I0317 18:50:14.681843 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:14.682349 kubelet[2163]: E0317 18:50:14.682328 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:15.141484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219932476.mount: Deactivated successfully. Mar 17 18:50:15.176659 env[1455]: time="2025-03-17T18:50:15.176610196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.200694 env[1455]: time="2025-03-17T18:50:15.200653501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.209695 env[1455]: time="2025-03-17T18:50:15.209659290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.210761 kubelet[2163]: W0317 18:50:15.210697 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.210761 kubelet[2163]: E0317 18:50:15.210737 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get "https://10.200.20.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.215175 env[1455]: time="2025-03-17T18:50:15.215138725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.224136 env[1455]: time="2025-03-17T18:50:15.224094993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.233730 env[1455]: time="2025-03-17T18:50:15.233695994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.238038 env[1455]: time="2025-03-17T18:50:15.238012045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.242534 env[1455]: time="2025-03-17T18:50:15.242496979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.261151 env[1455]: time="2025-03-17T18:50:15.261001048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.267047 env[1455]: time="2025-03-17T18:50:15.267012854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.281986 env[1455]: 
time="2025-03-17T18:50:15.281952807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.287514 env[1455]: time="2025-03-17T18:50:15.287481283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:15.334754 kubelet[2163]: W0317 18:50:15.334658 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-ffee15dd16&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.334754 kubelet[2163]: E0317 18:50:15.334734 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-ffee15dd16&limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.355848 kubelet[2163]: E0317 18:50:15.354955 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-ffee15dd16?timeout=10s\": dial tcp 10.200.20.36:6443: connect: connection refused" interval="1.6s" Mar 17 18:50:15.366003 env[1455]: time="2025-03-17T18:50:15.360326333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:15.366003 env[1455]: time="2025-03-17T18:50:15.360391294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:15.366003 env[1455]: time="2025-03-17T18:50:15.360403734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:15.366003 env[1455]: time="2025-03-17T18:50:15.360567138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8f24aa97af275eba3046844a7d60739cf8af8ed096cf9b7485800fd22afad77 pid=2201 runtime=io.containerd.runc.v2 Mar 17 18:50:15.380262 systemd[1]: Started cri-containerd-d8f24aa97af275eba3046844a7d60739cf8af8ed096cf9b7485800fd22afad77.scope. Mar 17 18:50:15.400641 env[1455]: time="2025-03-17T18:50:15.400471855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:15.400641 env[1455]: time="2025-03-17T18:50:15.400550857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:15.407427 env[1455]: time="2025-03-17T18:50:15.407373680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:15.407825 env[1455]: time="2025-03-17T18:50:15.407779929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/902e1fb7aa9c379e791c24434b236be02e74e9dc6792d08e690352e29addc2ea pid=2235 runtime=io.containerd.runc.v2 Mar 17 18:50:15.418319 env[1455]: time="2025-03-17T18:50:15.418267909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-ffee15dd16,Uid:6d9e81622c18646b8d989ba3a5649990,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8f24aa97af275eba3046844a7d60739cf8af8ed096cf9b7485800fd22afad77\"" Mar 17 18:50:15.425263 env[1455]: time="2025-03-17T18:50:15.425233455Z" level=info msg="CreateContainer within sandbox \"d8f24aa97af275eba3046844a7d60739cf8af8ed096cf9b7485800fd22afad77\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:50:15.432886 systemd[1]: Started cri-containerd-902e1fb7aa9c379e791c24434b236be02e74e9dc6792d08e690352e29addc2ea.scope. Mar 17 18:50:15.439675 kubelet[2163]: W0317 18:50:15.439548 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.439675 kubelet[2163]: E0317 18:50:15.439637 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.444792 env[1455]: time="2025-03-17T18:50:15.443858326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:15.444792 env[1455]: time="2025-03-17T18:50:15.443898727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:15.444792 env[1455]: time="2025-03-17T18:50:15.443909487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:15.444792 env[1455]: time="2025-03-17T18:50:15.444016450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a33833da6b5c5bb6a8cb31b6960d318ef924e853f5057171e30ccbca0167e85d pid=2270 runtime=io.containerd.runc.v2 Mar 17 18:50:15.464411 systemd[1]: Started cri-containerd-a33833da6b5c5bb6a8cb31b6960d318ef924e853f5057171e30ccbca0167e85d.scope. Mar 17 18:50:15.473624 env[1455]: time="2025-03-17T18:50:15.472048518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-ffee15dd16,Uid:91cc94e2703cf659f9d0a14977a9fd14,Namespace:kube-system,Attempt:0,} returns sandbox id \"902e1fb7aa9c379e791c24434b236be02e74e9dc6792d08e690352e29addc2ea\"" Mar 17 18:50:15.481039 env[1455]: time="2025-03-17T18:50:15.480998826Z" level=info msg="CreateContainer within sandbox \"902e1fb7aa9c379e791c24434b236be02e74e9dc6792d08e690352e29addc2ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:50:15.485224 kubelet[2163]: I0317 18:50:15.484938 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:15.485314 kubelet[2163]: E0317 18:50:15.485286 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.36:6443/api/v1/nodes\": dial tcp 10.200.20.36:6443: connect: connection refused" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:15.486575 kubelet[2163]: W0317 18:50:15.486495 2163 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.486575 kubelet[2163]: E0317 18:50:15.486558 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.36:6443: connect: connection refused Mar 17 18:50:15.496631 env[1455]: time="2025-03-17T18:50:15.496577873Z" level=info msg="CreateContainer within sandbox \"d8f24aa97af275eba3046844a7d60739cf8af8ed096cf9b7485800fd22afad77\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a29eb0b9269035a650e043455cbdde4e1bd55f9185ad88abbd3139190add779\"" Mar 17 18:50:15.497256 env[1455]: time="2025-03-17T18:50:15.497213806Z" level=info msg="StartContainer for \"7a29eb0b9269035a650e043455cbdde4e1bd55f9185ad88abbd3139190add779\"" Mar 17 18:50:15.500799 env[1455]: time="2025-03-17T18:50:15.500772681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-ffee15dd16,Uid:0000f7e146d197c3fe0a74ed58343655,Namespace:kube-system,Attempt:0,} returns sandbox id \"a33833da6b5c5bb6a8cb31b6960d318ef924e853f5057171e30ccbca0167e85d\"" Mar 17 18:50:15.504639 env[1455]: time="2025-03-17T18:50:15.504605641Z" level=info msg="CreateContainer within sandbox \"a33833da6b5c5bb6a8cb31b6960d318ef924e853f5057171e30ccbca0167e85d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:50:15.515198 systemd[1]: Started cri-containerd-7a29eb0b9269035a650e043455cbdde4e1bd55f9185ad88abbd3139190add779.scope. 
Mar 17 18:50:15.553811 env[1455]: time="2025-03-17T18:50:15.553761473Z" level=info msg="CreateContainer within sandbox \"902e1fb7aa9c379e791c24434b236be02e74e9dc6792d08e690352e29addc2ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf2d6b72e2160e3e3ba8aae2ec8bb02300e72fb8033d97034979e8cbd7bc8bd5\"" Mar 17 18:50:15.555455 env[1455]: time="2025-03-17T18:50:15.555430628Z" level=info msg="StartContainer for \"bf2d6b72e2160e3e3ba8aae2ec8bb02300e72fb8033d97034979e8cbd7bc8bd5\"" Mar 17 18:50:15.558718 env[1455]: time="2025-03-17T18:50:15.558677457Z" level=info msg="StartContainer for \"7a29eb0b9269035a650e043455cbdde4e1bd55f9185ad88abbd3139190add779\" returns successfully" Mar 17 18:50:15.577143 systemd[1]: Started cri-containerd-bf2d6b72e2160e3e3ba8aae2ec8bb02300e72fb8033d97034979e8cbd7bc8bd5.scope. Mar 17 18:50:15.582969 env[1455]: time="2025-03-17T18:50:15.582926126Z" level=info msg="CreateContainer within sandbox \"a33833da6b5c5bb6a8cb31b6960d318ef924e853f5057171e30ccbca0167e85d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"599e4c3aca4bebe5936930d72e8d53019801ab77695338379a94de8972a1ec7c\"" Mar 17 18:50:15.583744 env[1455]: time="2025-03-17T18:50:15.583715222Z" level=info msg="StartContainer for \"599e4c3aca4bebe5936930d72e8d53019801ab77695338379a94de8972a1ec7c\"" Mar 17 18:50:15.599372 systemd[1]: Started cri-containerd-599e4c3aca4bebe5936930d72e8d53019801ab77695338379a94de8972a1ec7c.scope. 
Mar 17 18:50:15.637641 env[1455]: time="2025-03-17T18:50:15.637593553Z" level=info msg="StartContainer for \"bf2d6b72e2160e3e3ba8aae2ec8bb02300e72fb8033d97034979e8cbd7bc8bd5\" returns successfully" Mar 17 18:50:15.655355 env[1455]: time="2025-03-17T18:50:15.655245884Z" level=info msg="StartContainer for \"599e4c3aca4bebe5936930d72e8d53019801ab77695338379a94de8972a1ec7c\" returns successfully" Mar 17 18:50:17.086745 kubelet[2163]: I0317 18:50:17.086706 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:17.684283 kubelet[2163]: I0317 18:50:17.684248 2163 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:17.794251 kubelet[2163]: E0317 18:50:17.794217 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Mar 17 18:50:17.941664 kubelet[2163]: I0317 18:50:17.941565 2163 apiserver.go:52] "Watching apiserver" Mar 17 18:50:17.950368 kubelet[2163]: I0317 18:50:17.950340 2163 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:50:18.125686 kubelet[2163]: E0317 18:50:18.125637 2163 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-a-ffee15dd16\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:20.798092 kubelet[2163]: W0317 18:50:20.798045 2163 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:21.310693 systemd[1]: Reloading. 
Mar 17 18:50:21.362360 /usr/lib/systemd/system-generators/torcx-generator[2450]: time="2025-03-17T18:50:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:50:21.362394 /usr/lib/systemd/system-generators/torcx-generator[2450]: time="2025-03-17T18:50:21Z" level=info msg="torcx already run" Mar 17 18:50:21.451745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:50:21.451912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:50:21.467432 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:50:21.588890 systemd[1]: Stopping kubelet.service... Mar 17 18:50:21.607067 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:50:21.607277 systemd[1]: Stopped kubelet.service. Mar 17 18:50:21.609650 systemd[1]: Starting kubelet.service... Mar 17 18:50:21.716841 systemd[1]: Started kubelet.service. Mar 17 18:50:21.785281 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:21.785623 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 17 18:50:21.785672 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:21.785786 kubelet[2514]: I0317 18:50:21.785761 2514 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:50:21.791431 kubelet[2514]: I0317 18:50:21.791403 2514 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:50:21.791431 kubelet[2514]: I0317 18:50:21.791428 2514 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:50:21.791618 kubelet[2514]: I0317 18:50:21.791600 2514 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:50:21.792835 kubelet[2514]: I0317 18:50:21.792802 2514 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:50:21.795272 kubelet[2514]: I0317 18:50:21.795219 2514 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:21.801667 kubelet[2514]: I0317 18:50:21.801651 2514 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:50:21.801982 kubelet[2514]: I0317 18:50:21.801955 2514 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:50:21.802198 kubelet[2514]: I0317 18:50:21.802049 2514 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-ffee15dd16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:50:21.802318 kubelet[2514]: I0317 18:50:21.802306 2514 topology_manager.go:138] "Creating topology manager with none policy" Mar 
17 18:50:21.802404 kubelet[2514]: I0317 18:50:21.802395 2514 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:50:21.802525 kubelet[2514]: I0317 18:50:21.802516 2514 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:21.802765 kubelet[2514]: I0317 18:50:21.802753 2514 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:50:21.802847 kubelet[2514]: I0317 18:50:21.802836 2514 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:50:21.802921 kubelet[2514]: I0317 18:50:21.802912 2514 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:50:21.802995 kubelet[2514]: I0317 18:50:21.802974 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:50:21.808733 kubelet[2514]: I0317 18:50:21.808716 2514 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:50:21.808961 kubelet[2514]: I0317 18:50:21.808948 2514 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:50:21.809371 kubelet[2514]: I0317 18:50:21.809359 2514 server.go:1264] "Started kubelet" Mar 17 18:50:21.811182 kubelet[2514]: I0317 18:50:21.811166 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:50:21.815229 kubelet[2514]: I0317 18:50:21.815201 2514 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:50:21.816060 kubelet[2514]: I0317 18:50:21.816045 2514 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:50:21.819473 kubelet[2514]: I0317 18:50:21.819432 2514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:50:21.819749 kubelet[2514]: I0317 18:50:21.819734 2514 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:50:21.820864 kubelet[2514]: I0317 18:50:21.820849 2514 
volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:50:21.822560 kubelet[2514]: I0317 18:50:21.822543 2514 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:50:21.822790 kubelet[2514]: I0317 18:50:21.822776 2514 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:50:21.826749 kubelet[2514]: I0317 18:50:21.826726 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:50:21.827561 kubelet[2514]: I0317 18:50:21.827544 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:50:21.827696 kubelet[2514]: I0317 18:50:21.827686 2514 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:50:21.827759 kubelet[2514]: I0317 18:50:21.827750 2514 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:50:21.827844 kubelet[2514]: E0317 18:50:21.827831 2514 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:50:21.840798 kubelet[2514]: I0317 18:50:21.840180 2514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:50:21.846800 kubelet[2514]: I0317 18:50:21.846771 2514 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:50:21.846800 kubelet[2514]: I0317 18:50:21.846792 2514 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:50:21.849090 kubelet[2514]: E0317 18:50:21.849065 2514 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:50:21.885959 kubelet[2514]: I0317 18:50:21.885916 2514 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:50:21.885959 kubelet[2514]: I0317 18:50:21.885934 2514 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:50:21.885959 kubelet[2514]: I0317 18:50:21.885953 2514 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:21.886133 kubelet[2514]: I0317 18:50:21.886088 2514 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:50:21.886133 kubelet[2514]: I0317 18:50:21.886100 2514 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:50:21.886133 kubelet[2514]: I0317 18:50:21.886118 2514 policy_none.go:49] "None policy: Start" Mar 17 18:50:21.886955 kubelet[2514]: I0317 18:50:21.886905 2514 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:50:21.886955 kubelet[2514]: I0317 18:50:21.886952 2514 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:50:21.887085 kubelet[2514]: I0317 18:50:21.887066 2514 state_mem.go:75] "Updated machine memory state" Mar 17 18:50:21.891251 kubelet[2514]: I0317 18:50:21.891221 2514 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:50:21.891412 kubelet[2514]: I0317 18:50:21.891370 2514 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:50:21.891486 kubelet[2514]: I0317 18:50:21.891468 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:50:21.924100 kubelet[2514]: I0317 18:50:21.924062 2514 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:21.928262 kubelet[2514]: I0317 18:50:21.928217 2514 topology_manager.go:215] "Topology Admit Handler" podUID="6d9e81622c18646b8d989ba3a5649990" podNamespace="kube-system" 
podName="kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:21.928341 kubelet[2514]: I0317 18:50:21.928330 2514 topology_manager.go:215] "Topology Admit Handler" podUID="91cc94e2703cf659f9d0a14977a9fd14" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:21.928374 kubelet[2514]: I0317 18:50:21.928365 2514 topology_manager.go:215] "Topology Admit Handler" podUID="0000f7e146d197c3fe0a74ed58343655" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:21.949570 kubelet[2514]: W0317 18:50:21.949548 2514 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:21.949769 kubelet[2514]: E0317 18:50:21.949752 2514 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:21.949868 kubelet[2514]: W0317 18:50:21.949553 2514 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:21.949984 kubelet[2514]: W0317 18:50:21.949871 2514 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:21.961019 kubelet[2514]: I0317 18:50:21.960992 2514 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:21.961120 kubelet[2514]: I0317 18:50:21.961079 2514 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.023355 kubelet[2514]: I0317 18:50:22.023323 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6d9e81622c18646b8d989ba3a5649990-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-ffee15dd16\" (UID: \"6d9e81622c18646b8d989ba3a5649990\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.023518 kubelet[2514]: I0317 18:50:22.023503 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.023649 kubelet[2514]: I0317 18:50:22.023632 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0000f7e146d197c3fe0a74ed58343655-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-ffee15dd16\" (UID: \"0000f7e146d197c3fe0a74ed58343655\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.023734 kubelet[2514]: I0317 18:50:22.023721 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e81622c18646b8d989ba3a5649990-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-ffee15dd16\" (UID: \"6d9e81622c18646b8d989ba3a5649990\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.023824 kubelet[2514]: I0317 18:50:22.023811 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e81622c18646b8d989ba3a5649990-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-ffee15dd16\" (UID: \"6d9e81622c18646b8d989ba3a5649990\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.023908 kubelet[2514]: I0317 18:50:22.023894 2514 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.023990 kubelet[2514]: I0317 18:50:22.023977 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.024063 kubelet[2514]: I0317 18:50:22.024051 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.024143 kubelet[2514]: I0317 18:50:22.024128 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91cc94e2703cf659f9d0a14977a9fd14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-ffee15dd16\" (UID: \"91cc94e2703cf659f9d0a14977a9fd14\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" Mar 17 18:50:22.361606 sudo[2544]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:50:22.361825 sudo[2544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:50:22.808300 kubelet[2514]: I0317 18:50:22.808273 2514 apiserver.go:52] "Watching 
apiserver" Mar 17 18:50:22.823490 kubelet[2514]: I0317 18:50:22.823454 2514 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:50:22.872663 sudo[2544]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:22.936659 kubelet[2514]: I0317 18:50:22.936602 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-a-ffee15dd16" podStartSLOduration=1.9365592010000001 podStartE2EDuration="1.936559201s" podCreationTimestamp="2025-03-17 18:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:22.936332197 +0000 UTC m=+1.214376982" watchObservedRunningTime="2025-03-17 18:50:22.936559201 +0000 UTC m=+1.214603986" Mar 17 18:50:23.027786 kubelet[2514]: I0317 18:50:23.027723 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-a-ffee15dd16" podStartSLOduration=2.027707453 podStartE2EDuration="2.027707453s" podCreationTimestamp="2025-03-17 18:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:23.005339904 +0000 UTC m=+1.283384689" watchObservedRunningTime="2025-03-17 18:50:23.027707453 +0000 UTC m=+1.305752278" Mar 17 18:50:23.028109 kubelet[2514]: I0317 18:50:23.028081 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-a-ffee15dd16" podStartSLOduration=3.02807222 podStartE2EDuration="3.02807222s" podCreationTimestamp="2025-03-17 18:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:23.027700973 +0000 UTC m=+1.305745798" watchObservedRunningTime="2025-03-17 18:50:23.02807222 +0000 UTC m=+1.306117045" Mar 17 
18:50:24.775713 sudo[1820]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:24.870112 sshd[1817]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:24.872263 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:50:24.872449 systemd[1]: session-7.scope: Consumed 7.843s CPU time. Mar 17 18:50:24.872864 systemd[1]: sshd@4-10.200.20.36:22-10.200.16.10:52454.service: Deactivated successfully. Mar 17 18:50:24.873927 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:50:24.874859 systemd-logind[1442]: Removed session 7. Mar 17 18:50:35.600760 kubelet[2514]: I0317 18:50:35.600730 2514 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:50:35.601498 env[1455]: time="2025-03-17T18:50:35.601455593Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:50:35.601951 kubelet[2514]: I0317 18:50:35.601932 2514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:50:36.684863 kubelet[2514]: I0317 18:50:36.684821 2514 topology_manager.go:215] "Topology Admit Handler" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" podNamespace="kube-system" podName="cilium-x7qm5" Mar 17 18:50:36.690141 kubelet[2514]: I0317 18:50:36.690070 2514 topology_manager.go:215] "Topology Admit Handler" podUID="cac0be5c-52da-4ff3-9d50-8f4715a04e58" podNamespace="kube-system" podName="kube-proxy-vmgm4" Mar 17 18:50:36.692936 systemd[1]: Created slice kubepods-burstable-podf3935933_6552_4a43_aa50_dff067ae1e27.slice. 
Mar 17 18:50:36.694742 kubelet[2514]: I0317 18:50:36.694723 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-lib-modules\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.694859 kubelet[2514]: I0317 18:50:36.694843 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-hostproc\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.694935 kubelet[2514]: I0317 18:50:36.694923 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cac0be5c-52da-4ff3-9d50-8f4715a04e58-xtables-lock\") pod \"kube-proxy-vmgm4\" (UID: \"cac0be5c-52da-4ff3-9d50-8f4715a04e58\") " pod="kube-system/kube-proxy-vmgm4"
Mar 17 18:50:36.695008 kubelet[2514]: I0317 18:50:36.694996 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-bpf-maps\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695160 kubelet[2514]: I0317 18:50:36.695145 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cni-path\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695248 kubelet[2514]: I0317 18:50:36.695235 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-config-path\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695318 kubelet[2514]: I0317 18:50:36.695307 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-etc-cni-netd\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695390 kubelet[2514]: I0317 18:50:36.695378 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-hubble-tls\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695465 kubelet[2514]: I0317 18:50:36.695445 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cac0be5c-52da-4ff3-9d50-8f4715a04e58-lib-modules\") pod \"kube-proxy-vmgm4\" (UID: \"cac0be5c-52da-4ff3-9d50-8f4715a04e58\") " pod="kube-system/kube-proxy-vmgm4"
Mar 17 18:50:36.695539 kubelet[2514]: I0317 18:50:36.695527 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-cgroup\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695707 kubelet[2514]: I0317 18:50:36.695693 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-xtables-lock\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695797 kubelet[2514]: I0317 18:50:36.695784 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3935933-6552-4a43-aa50-dff067ae1e27-clustermesh-secrets\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695870 kubelet[2514]: I0317 18:50:36.695858 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-kernel\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.695934 kubelet[2514]: I0317 18:50:36.695922 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cac0be5c-52da-4ff3-9d50-8f4715a04e58-kube-proxy\") pod \"kube-proxy-vmgm4\" (UID: \"cac0be5c-52da-4ff3-9d50-8f4715a04e58\") " pod="kube-system/kube-proxy-vmgm4"
Mar 17 18:50:36.696002 kubelet[2514]: I0317 18:50:36.695990 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-run\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.696069 kubelet[2514]: I0317 18:50:36.696057 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-net\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.696132 kubelet[2514]: I0317 18:50:36.696119 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t57v\" (UniqueName: \"kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-kube-api-access-4t57v\") pod \"cilium-x7qm5\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " pod="kube-system/cilium-x7qm5"
Mar 17 18:50:36.696198 kubelet[2514]: I0317 18:50:36.696187 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t94r\" (UniqueName: \"kubernetes.io/projected/cac0be5c-52da-4ff3-9d50-8f4715a04e58-kube-api-access-7t94r\") pod \"kube-proxy-vmgm4\" (UID: \"cac0be5c-52da-4ff3-9d50-8f4715a04e58\") " pod="kube-system/kube-proxy-vmgm4"
Mar 17 18:50:36.702195 systemd[1]: Created slice kubepods-besteffort-podcac0be5c_52da_4ff3_9d50_8f4715a04e58.slice.
Mar 17 18:50:36.997809 env[1455]: time="2025-03-17T18:50:36.997689954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7qm5,Uid:f3935933-6552-4a43-aa50-dff067ae1e27,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:37.009635 env[1455]: time="2025-03-17T18:50:37.009322306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmgm4,Uid:cac0be5c-52da-4ff3-9d50-8f4715a04e58,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:37.039510 kubelet[2514]: I0317 18:50:37.039464 2514 topology_manager.go:215] "Topology Admit Handler" podUID="3d72697b-6ec9-491e-b7e2-97b6a32b1119" podNamespace="kube-system" podName="cilium-operator-599987898-7d5ns"
Mar 17 18:50:37.045234 systemd[1]: Created slice kubepods-besteffort-pod3d72697b_6ec9_491e_b7e2_97b6a32b1119.slice.
Mar 17 18:50:37.067955 env[1455]: time="2025-03-17T18:50:37.067881105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:37.068100 env[1455]: time="2025-03-17T18:50:37.067939426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:37.068100 env[1455]: time="2025-03-17T18:50:37.067950386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:37.068269 env[1455]: time="2025-03-17T18:50:37.068225389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bad3908b0fbcea9b7facccb552c3362e8b85a4b783e6d67eac5ebfada23f974 pid=2607 runtime=io.containerd.runc.v2
Mar 17 18:50:37.069890 env[1455]: time="2025-03-17T18:50:37.069839170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:37.070039 env[1455]: time="2025-03-17T18:50:37.070015853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:37.070131 env[1455]: time="2025-03-17T18:50:37.070110414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:37.070437 env[1455]: time="2025-03-17T18:50:37.070374537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a pid=2612 runtime=io.containerd.runc.v2
Mar 17 18:50:37.090029 systemd[1]: Started cri-containerd-2bad3908b0fbcea9b7facccb552c3362e8b85a4b783e6d67eac5ebfada23f974.scope.
Mar 17 18:50:37.097513 systemd[1]: Started cri-containerd-0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a.scope.
Mar 17 18:50:37.099892 kubelet[2514]: I0317 18:50:37.099798 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d72697b-6ec9-491e-b7e2-97b6a32b1119-cilium-config-path\") pod \"cilium-operator-599987898-7d5ns\" (UID: \"3d72697b-6ec9-491e-b7e2-97b6a32b1119\") " pod="kube-system/cilium-operator-599987898-7d5ns"
Mar 17 18:50:37.099892 kubelet[2514]: I0317 18:50:37.099838 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4986s\" (UniqueName: \"kubernetes.io/projected/3d72697b-6ec9-491e-b7e2-97b6a32b1119-kube-api-access-4986s\") pod \"cilium-operator-599987898-7d5ns\" (UID: \"3d72697b-6ec9-491e-b7e2-97b6a32b1119\") " pod="kube-system/cilium-operator-599987898-7d5ns"
Mar 17 18:50:37.132253 env[1455]: time="2025-03-17T18:50:37.132217459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7qm5,Uid:f3935933-6552-4a43-aa50-dff067ae1e27,Namespace:kube-system,Attempt:0,} returns sandbox id \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\""
Mar 17 18:50:37.134978 env[1455]: time="2025-03-17T18:50:37.133903561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vmgm4,Uid:cac0be5c-52da-4ff3-9d50-8f4715a04e58,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bad3908b0fbcea9b7facccb552c3362e8b85a4b783e6d67eac5ebfada23f974\""
Mar 17 18:50:37.139953 env[1455]: time="2025-03-17T18:50:37.139911559Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:50:37.141208 env[1455]: time="2025-03-17T18:50:37.141181255Z" level=info msg="CreateContainer within sandbox \"2bad3908b0fbcea9b7facccb552c3362e8b85a4b783e6d67eac5ebfada23f974\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:50:37.197318 env[1455]: time="2025-03-17T18:50:37.197264302Z" level=info msg="CreateContainer within sandbox \"2bad3908b0fbcea9b7facccb552c3362e8b85a4b783e6d67eac5ebfada23f974\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d90d5eb0af47acff1a444b53d39c28b1e57adadcc22638b7e96f6699e679271a\""
Mar 17 18:50:37.198069 env[1455]: time="2025-03-17T18:50:37.198029072Z" level=info msg="StartContainer for \"d90d5eb0af47acff1a444b53d39c28b1e57adadcc22638b7e96f6699e679271a\""
Mar 17 18:50:37.218641 systemd[1]: Started cri-containerd-d90d5eb0af47acff1a444b53d39c28b1e57adadcc22638b7e96f6699e679271a.scope.
Mar 17 18:50:37.261692 env[1455]: time="2025-03-17T18:50:37.261575856Z" level=info msg="StartContainer for \"d90d5eb0af47acff1a444b53d39c28b1e57adadcc22638b7e96f6699e679271a\" returns successfully"
Mar 17 18:50:37.348732 env[1455]: time="2025-03-17T18:50:37.348544783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7d5ns,Uid:3d72697b-6ec9-491e-b7e2-97b6a32b1119,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:37.405511 env[1455]: time="2025-03-17T18:50:37.405429160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:37.405511 env[1455]: time="2025-03-17T18:50:37.405469001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:37.405746 env[1455]: time="2025-03-17T18:50:37.405479081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:37.405877 env[1455]: time="2025-03-17T18:50:37.405829805Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa pid=2748 runtime=io.containerd.runc.v2
Mar 17 18:50:37.415975 systemd[1]: Started cri-containerd-4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa.scope.
Mar 17 18:50:37.457742 env[1455]: time="2025-03-17T18:50:37.457692158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7d5ns,Uid:3d72697b-6ec9-491e-b7e2-97b6a32b1119,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa\""
Mar 17 18:50:41.842913 kubelet[2514]: I0317 18:50:41.842850 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vmgm4" podStartSLOduration=5.842835819 podStartE2EDuration="5.842835819s" podCreationTimestamp="2025-03-17 18:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:37.910991153 +0000 UTC m=+16.189035938" watchObservedRunningTime="2025-03-17 18:50:41.842835819 +0000 UTC m=+20.120880644"
Mar 17 18:50:43.334950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177291355.mount: Deactivated successfully.
Mar 17 18:50:45.568386 env[1455]: time="2025-03-17T18:50:45.568343819Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:45.578577 env[1455]: time="2025-03-17T18:50:45.578546693Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:45.586182 env[1455]: time="2025-03-17T18:50:45.586159218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:45.586899 env[1455]: time="2025-03-17T18:50:45.586873026Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 17 18:50:45.589094 env[1455]: time="2025-03-17T18:50:45.588773447Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:50:45.590642 env[1455]: time="2025-03-17T18:50:45.590614347Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:50:45.625781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350379308.mount: Deactivated successfully.
Mar 17 18:50:45.631694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863946355.mount: Deactivated successfully.
Mar 17 18:50:45.644470 env[1455]: time="2025-03-17T18:50:45.644381349Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\""
Mar 17 18:50:45.645010 env[1455]: time="2025-03-17T18:50:45.644986555Z" level=info msg="StartContainer for \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\""
Mar 17 18:50:45.664523 systemd[1]: Started cri-containerd-65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9.scope.
Mar 17 18:50:45.697747 systemd[1]: cri-containerd-65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9.scope: Deactivated successfully.
Mar 17 18:50:45.699455 env[1455]: time="2025-03-17T18:50:45.699410444Z" level=info msg="StartContainer for \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\" returns successfully"
Mar 17 18:50:46.624705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9-rootfs.mount: Deactivated successfully.
Mar 17 18:50:47.426080 env[1455]: time="2025-03-17T18:50:47.426030712Z" level=info msg="shim disconnected" id=65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9
Mar 17 18:50:47.426080 env[1455]: time="2025-03-17T18:50:47.426076113Z" level=warning msg="cleaning up after shim disconnected" id=65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9 namespace=k8s.io
Mar 17 18:50:47.426080 env[1455]: time="2025-03-17T18:50:47.426084633Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:47.434333 env[1455]: time="2025-03-17T18:50:47.434288441Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2925 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:47.921065 env[1455]: time="2025-03-17T18:50:47.921026698Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:50:47.998849 env[1455]: time="2025-03-17T18:50:47.998796338Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\""
Mar 17 18:50:47.999509 env[1455]: time="2025-03-17T18:50:47.999480265Z" level=info msg="StartContainer for \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\""
Mar 17 18:50:48.026812 systemd[1]: Started cri-containerd-05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884.scope.
Mar 17 18:50:48.056024 env[1455]: time="2025-03-17T18:50:48.055987186Z" level=info msg="StartContainer for \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\" returns successfully"
Mar 17 18:50:48.061297 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:50:48.061504 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:50:48.061789 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:50:48.063422 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:50:48.073018 systemd[1]: cri-containerd-05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884.scope: Deactivated successfully.
Mar 17 18:50:48.074131 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:50:48.110357 env[1455]: time="2025-03-17T18:50:48.110314683Z" level=info msg="shim disconnected" id=05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884
Mar 17 18:50:48.110639 env[1455]: time="2025-03-17T18:50:48.110617886Z" level=warning msg="cleaning up after shim disconnected" id=05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884 namespace=k8s.io
Mar 17 18:50:48.110737 env[1455]: time="2025-03-17T18:50:48.110722367Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:48.117355 env[1455]: time="2025-03-17T18:50:48.117325597Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2989 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:48.923982 env[1455]: time="2025-03-17T18:50:48.923705840Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:50:48.964155 systemd[1]: run-containerd-runc-k8s.io-05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884-runc.Ozdn8W.mount: Deactivated successfully.
Mar 17 18:50:48.964273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884-rootfs.mount: Deactivated successfully.
Mar 17 18:50:48.995322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733646788.mount: Deactivated successfully.
Mar 17 18:50:49.001520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739172818.mount: Deactivated successfully.
Mar 17 18:50:49.022849 env[1455]: time="2025-03-17T18:50:49.022810769Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\""
Mar 17 18:50:49.023683 env[1455]: time="2025-03-17T18:50:49.023649338Z" level=info msg="StartContainer for \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\""
Mar 17 18:50:49.045211 systemd[1]: Started cri-containerd-8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce.scope.
Mar 17 18:50:49.089879 systemd[1]: cri-containerd-8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce.scope: Deactivated successfully.
Mar 17 18:50:49.093259 env[1455]: time="2025-03-17T18:50:49.093028022Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3935933_6552_4a43_aa50_dff067ae1e27.slice/cri-containerd-8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce.scope/memory.events\": no such file or directory"
Mar 17 18:50:49.099250 env[1455]: time="2025-03-17T18:50:49.099218807Z" level=info msg="StartContainer for \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\" returns successfully"
Mar 17 18:50:49.147436 env[1455]: time="2025-03-17T18:50:49.147376390Z" level=info msg="shim disconnected" id=8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce
Mar 17 18:50:49.147716 env[1455]: time="2025-03-17T18:50:49.147696753Z" level=warning msg="cleaning up after shim disconnected" id=8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce namespace=k8s.io
Mar 17 18:50:49.147783 env[1455]: time="2025-03-17T18:50:49.147769874Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:49.154428 env[1455]: time="2025-03-17T18:50:49.154399943Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3046 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:49.928560 env[1455]: time="2025-03-17T18:50:49.928520348Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:50:49.991456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1091886753.mount: Deactivated successfully.
Mar 17 18:50:49.996427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985811949.mount: Deactivated successfully.
Mar 17 18:50:50.025022 env[1455]: time="2025-03-17T18:50:50.024971191Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\""
Mar 17 18:50:50.025740 env[1455]: time="2025-03-17T18:50:50.025714559Z" level=info msg="StartContainer for \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\""
Mar 17 18:50:50.053556 systemd[1]: Started cri-containerd-87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c.scope.
Mar 17 18:50:50.087501 systemd[1]: cri-containerd-87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c.scope: Deactivated successfully.
Mar 17 18:50:50.088843 env[1455]: time="2025-03-17T18:50:50.088650486Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3935933_6552_4a43_aa50_dff067ae1e27.slice/cri-containerd-87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c.scope/memory.events\": no such file or directory"
Mar 17 18:50:50.098752 env[1455]: time="2025-03-17T18:50:50.098707429Z" level=info msg="StartContainer for \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\" returns successfully"
Mar 17 18:50:50.366913 env[1455]: time="2025-03-17T18:50:50.366863424Z" level=info msg="shim disconnected" id=87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c
Mar 17 18:50:50.366913 env[1455]: time="2025-03-17T18:50:50.366907185Z" level=warning msg="cleaning up after shim disconnected" id=87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c namespace=k8s.io
Mar 17 18:50:50.366913 env[1455]: time="2025-03-17T18:50:50.366917345Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:50.373896 env[1455]: time="2025-03-17T18:50:50.373853856Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3102 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:50.438516 env[1455]: time="2025-03-17T18:50:50.438467800Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:50.446389 env[1455]: time="2025-03-17T18:50:50.446352881Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:50.450969 env[1455]: time="2025-03-17T18:50:50.450929928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:50.456173 env[1455]: time="2025-03-17T18:50:50.456121981Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 17 18:50:50.460937 env[1455]: time="2025-03-17T18:50:50.460886030Z" level=info msg="CreateContainer within sandbox \"4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:50:50.497398 env[1455]: time="2025-03-17T18:50:50.497355085Z" level=info msg="CreateContainer within sandbox \"4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\""
Mar 17 18:50:50.499295 env[1455]: time="2025-03-17T18:50:50.498764699Z" level=info msg="StartContainer for \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\""
Mar 17 18:50:50.512893 systemd[1]: Started cri-containerd-889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b.scope.
Mar 17 18:50:50.544441 env[1455]: time="2025-03-17T18:50:50.544398768Z" level=info msg="StartContainer for \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\" returns successfully"
Mar 17 18:50:50.931338 env[1455]: time="2025-03-17T18:50:50.931195142Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:50:50.977128 env[1455]: time="2025-03-17T18:50:50.977054693Z" level=info msg="CreateContainer within sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\""
Mar 17 18:50:50.977749 env[1455]: time="2025-03-17T18:50:50.977724980Z" level=info msg="StartContainer for \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\""
Mar 17 18:50:51.008594 systemd[1]: run-containerd-runc-k8s.io-3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6-runc.Hl94rc.mount: Deactivated successfully.
Mar 17 18:50:51.010244 systemd[1]: Started cri-containerd-3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6.scope.
Mar 17 18:50:51.078180 env[1455]: time="2025-03-17T18:50:51.078131239Z" level=info msg="StartContainer for \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\" returns successfully"
Mar 17 18:50:51.232620 kubelet[2514]: I0317 18:50:51.231866 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7d5ns" podStartSLOduration=2.2338583 podStartE2EDuration="15.231849633s" podCreationTimestamp="2025-03-17 18:50:36 +0000 UTC" firstStartedPulling="2025-03-17 18:50:37.460118709 +0000 UTC m=+15.738163494" lastFinishedPulling="2025-03-17 18:50:50.458110002 +0000 UTC m=+28.736154827" observedRunningTime="2025-03-17 18:50:51.039202366 +0000 UTC m=+29.317247191" watchObservedRunningTime="2025-03-17 18:50:51.231849633 +0000 UTC m=+29.509894458"
Mar 17 18:50:51.313612 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:50:51.336262 kubelet[2514]: I0317 18:50:51.336015 2514 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:50:51.587131 kubelet[2514]: I0317 18:50:51.587089 2514 topology_manager.go:215] "Topology Admit Handler" podUID="2be581ad-c1ed-4a89-866d-ddae4dd3c7b5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nckk2"
Mar 17 18:50:51.590312 kubelet[2514]: I0317 18:50:51.590289 2514 topology_manager.go:215] "Topology Admit Handler" podUID="12166afa-b71c-4f69-823d-66f703ba4b82" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r98q8"
Mar 17 18:50:51.592302 systemd[1]: Created slice kubepods-burstable-pod2be581ad_c1ed_4a89_866d_ddae4dd3c7b5.slice.
Mar 17 18:50:51.597112 systemd[1]: Created slice kubepods-burstable-pod12166afa_b71c_4f69_823d_66f703ba4b82.slice.
Mar 17 18:50:51.690906 kubelet[2514]: I0317 18:50:51.690875 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12166afa-b71c-4f69-823d-66f703ba4b82-config-volume\") pod \"coredns-7db6d8ff4d-r98q8\" (UID: \"12166afa-b71c-4f69-823d-66f703ba4b82\") " pod="kube-system/coredns-7db6d8ff4d-r98q8"
Mar 17 18:50:51.691144 kubelet[2514]: I0317 18:50:51.691116 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2be581ad-c1ed-4a89-866d-ddae4dd3c7b5-config-volume\") pod \"coredns-7db6d8ff4d-nckk2\" (UID: \"2be581ad-c1ed-4a89-866d-ddae4dd3c7b5\") " pod="kube-system/coredns-7db6d8ff4d-nckk2"
Mar 17 18:50:51.691260 kubelet[2514]: I0317 18:50:51.691244 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fdg8\" (UniqueName: \"kubernetes.io/projected/2be581ad-c1ed-4a89-866d-ddae4dd3c7b5-kube-api-access-4fdg8\") pod \"coredns-7db6d8ff4d-nckk2\" (UID: \"2be581ad-c1ed-4a89-866d-ddae4dd3c7b5\") " pod="kube-system/coredns-7db6d8ff4d-nckk2"
Mar 17 18:50:51.691368 kubelet[2514]: I0317 18:50:51.691354 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sbw\" (UniqueName: \"kubernetes.io/projected/12166afa-b71c-4f69-823d-66f703ba4b82-kube-api-access-d2sbw\") pod \"coredns-7db6d8ff4d-r98q8\" (UID: \"12166afa-b71c-4f69-823d-66f703ba4b82\") " pod="kube-system/coredns-7db6d8ff4d-r98q8"
Mar 17 18:50:51.894605 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:50:51.896761 env[1455]: time="2025-03-17T18:50:51.896714715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nckk2,Uid:2be581ad-c1ed-4a89-866d-ddae4dd3c7b5,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:51.900611 env[1455]: time="2025-03-17T18:50:51.900459153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r98q8,Uid:12166afa-b71c-4f69-823d-66f703ba4b82,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:51.969538 kubelet[2514]: I0317 18:50:51.969486 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x7qm5" podStartSLOduration=7.520221877 podStartE2EDuration="15.969470651s" podCreationTimestamp="2025-03-17 18:50:36 +0000 UTC" firstStartedPulling="2025-03-17 18:50:37.139010827 +0000 UTC m=+15.417055652" lastFinishedPulling="2025-03-17 18:50:45.588259601 +0000 UTC m=+23.866304426" observedRunningTime="2025-03-17 18:50:51.964167117 +0000 UTC m=+30.242211942" watchObservedRunningTime="2025-03-17 18:50:51.969470651 +0000 UTC m=+30.247515476"
Mar 17 18:50:54.406317 systemd-networkd[1607]: cilium_host: Link UP
Mar 17 18:50:54.406417 systemd-networkd[1607]: cilium_net: Link UP
Mar 17 18:50:54.406420 systemd-networkd[1607]: cilium_net: Gained carrier
Mar 17 18:50:54.406531 systemd-networkd[1607]: cilium_host: Gained carrier
Mar 17 18:50:54.414443 systemd-networkd[1607]: cilium_host: Gained IPv6LL
Mar 17 18:50:54.415131 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:50:54.658307 systemd-networkd[1607]: cilium_vxlan: Link UP
Mar 17 18:50:54.658313 systemd-networkd[1607]: cilium_vxlan: Gained carrier
Mar 17 18:50:54.730740 systemd-networkd[1607]: cilium_net: Gained IPv6LL
Mar 17 18:50:54.893419 waagent[1656]: 2025-03-17T18:50:54.893339Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Mar 17 18:50:54.901137 waagent[1656]: 2025-03-17T18:50:54.901073Z INFO ExtHandler
Mar 17 18:50:54.901309 waagent[1656]: 2025-03-17T18:50:54.901256Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f9d22a40-fad6-4b5b-b651-33eb5ed13949 eTag: 16615449343903943242 source: Fabric]
Mar 17 18:50:54.902122 waagent[1656]: 2025-03-17T18:50:54.902059Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 17 18:50:54.903352 waagent[1656]: 2025-03-17T18:50:54.903287Z INFO ExtHandler
Mar 17 18:50:54.903482 waagent[1656]: 2025-03-17T18:50:54.903436Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Mar 17 18:50:54.924613 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:50:54.981764 waagent[1656]: 2025-03-17T18:50:54.981701Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 17 18:50:55.070985 waagent[1656]: 2025-03-17T18:50:55.070827Z INFO ExtHandler Downloaded certificate {'thumbprint': '4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07', 'hasPrivateKey': True}
Mar 17 18:50:55.072043 waagent[1656]: 2025-03-17T18:50:55.071963Z INFO ExtHandler Downloaded certificate {'thumbprint': '252FD62682B7ADCB1977719C51EAE11DBE1D43BE', 'hasPrivateKey': False}
Mar 17 18:50:55.073223 waagent[1656]: 2025-03-17T18:50:55.073128Z INFO ExtHandler Fetch goal state completed
Mar 17 18:50:55.074253 waagent[1656]: 2025-03-17T18:50:55.074163Z INFO ExtHandler ExtHandler VM enabled for RSM updates, switching to RSM update mode
Mar 17 18:50:55.075590 waagent[1656]: 2025-03-17T18:50:55.075490Z INFO ExtHandler ExtHandler
Mar 17 18:50:55.075748 waagent[1656]: 2025-03-17T18:50:55.075686Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 9262a5dc-8769-4b4a-bb39-9ed99f1c4409 correlation 47dcf822-fa87-47a8-8bd0-c769199f2242 created: 2025-03-17T18:50:44.457756Z]
Mar 17 18:50:55.076550 waagent[1656]: 2025-03-17T18:50:55.076470Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 17 18:50:55.078522 waagent[1656]: 2025-03-17T18:50:55.078440Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 2 ms]
Mar 17 18:50:55.768709 systemd-networkd[1607]: lxc_health: Link UP
Mar 17 18:50:55.779628 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:50:55.779766 systemd-networkd[1607]: lxc_health: Gained carrier
Mar 17 18:50:56.022369 systemd-networkd[1607]: lxc46f0c156e509: Link UP
Mar 17 18:50:56.036795 systemd-networkd[1607]: lxc0945feb0123a: Link UP
Mar 17 18:50:56.045625 kernel: eth0: renamed from tmpd2964
Mar 17 18:50:56.059613 kernel: eth0: renamed from tmp2d943
Mar 17 18:50:56.071956 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc46f0c156e509: link becomes ready
Mar 17 18:50:56.080616 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0945feb0123a: link becomes ready
Mar 17 18:50:56.081484 systemd-networkd[1607]: lxc46f0c156e509: Gained carrier
Mar 17 18:50:56.081749 systemd-networkd[1607]: lxc0945feb0123a: Gained carrier
Mar 17 18:50:56.274797 systemd-networkd[1607]: cilium_vxlan: Gained IPv6LL
Mar 17 18:50:57.106726 systemd-networkd[1607]: lxc46f0c156e509: Gained IPv6LL
Mar 17 18:50:57.490763 systemd-networkd[1607]: lxc0945feb0123a: Gained IPv6LL
Mar 17 18:50:57.618734 systemd-networkd[1607]: lxc_health: Gained IPv6LL
Mar 17 18:50:59.537695 env[1455]: time="2025-03-17T18:50:59.537565602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:59.537695 env[1455]: time="2025-03-17T18:50:59.537660803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:59.537695 env[1455]: time="2025-03-17T18:50:59.537672363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:59.538247 env[1455]: time="2025-03-17T18:50:59.538207927Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d943bc0b53ddc72061ea5cafcd78fdc7ae1de64436f84a213c0d912139f3185 pid=3701 runtime=io.containerd.runc.v2
Mar 17 18:50:59.549683 env[1455]: time="2025-03-17T18:50:59.549611550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:59.549803 env[1455]: time="2025-03-17T18:50:59.549710631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:59.549803 env[1455]: time="2025-03-17T18:50:59.549736791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:59.549897 env[1455]: time="2025-03-17T18:50:59.549866632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2964d9eaf51f50d5472ecddb50ebe58ee98ba44adba865165f658419d3d727c pid=3719 runtime=io.containerd.runc.v2
Mar 17 18:50:59.569615 systemd[1]: run-containerd-runc-k8s.io-d2964d9eaf51f50d5472ecddb50ebe58ee98ba44adba865165f658419d3d727c-runc.YNLBwm.mount: Deactivated successfully.
Mar 17 18:50:59.572713 systemd[1]: Started cri-containerd-d2964d9eaf51f50d5472ecddb50ebe58ee98ba44adba865165f658419d3d727c.scope.
Mar 17 18:50:59.582414 systemd[1]: Started cri-containerd-2d943bc0b53ddc72061ea5cafcd78fdc7ae1de64436f84a213c0d912139f3185.scope.
Mar 17 18:50:59.620761 env[1455]: time="2025-03-17T18:50:59.620713427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r98q8,Uid:12166afa-b71c-4f69-823d-66f703ba4b82,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d943bc0b53ddc72061ea5cafcd78fdc7ae1de64436f84a213c0d912139f3185\""
Mar 17 18:50:59.623511 env[1455]: time="2025-03-17T18:50:59.623467372Z" level=info msg="CreateContainer within sandbox \"2d943bc0b53ddc72061ea5cafcd78fdc7ae1de64436f84a213c0d912139f3185\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:50:59.642599 env[1455]: time="2025-03-17T18:50:59.642523623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nckk2,Uid:2be581ad-c1ed-4a89-866d-ddae4dd3c7b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2964d9eaf51f50d5472ecddb50ebe58ee98ba44adba865165f658419d3d727c\""
Mar 17 18:50:59.646986 env[1455]: time="2025-03-17T18:50:59.646948023Z" level=info msg="CreateContainer within sandbox \"d2964d9eaf51f50d5472ecddb50ebe58ee98ba44adba865165f658419d3d727c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:50:59.695304 env[1455]: time="2025-03-17T18:50:59.695258856Z" level=info msg="CreateContainer within sandbox \"2d943bc0b53ddc72061ea5cafcd78fdc7ae1de64436f84a213c0d912139f3185\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2526991b13b60dab44b2c63dbe21897e715751b31fcf5991d97a0451fdd12be9\""
Mar 17 18:50:59.695964 env[1455]: time="2025-03-17T18:50:59.695925942Z" level=info msg="StartContainer for \"2526991b13b60dab44b2c63dbe21897e715751b31fcf5991d97a0451fdd12be9\""
Mar 17 18:50:59.712728 systemd[1]: Started cri-containerd-2526991b13b60dab44b2c63dbe21897e715751b31fcf5991d97a0451fdd12be9.scope.
Mar 17 18:50:59.732477 env[1455]: time="2025-03-17T18:50:59.731200458Z" level=info msg="CreateContainer within sandbox \"d2964d9eaf51f50d5472ecddb50ebe58ee98ba44adba865165f658419d3d727c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0f55300da269c00b281276d832b08b6dfec3d180672a797742fd1d97da3ee95\""
Mar 17 18:50:59.732477 env[1455]: time="2025-03-17T18:50:59.731840504Z" level=info msg="StartContainer for \"b0f55300da269c00b281276d832b08b6dfec3d180672a797742fd1d97da3ee95\""
Mar 17 18:50:59.749487 systemd[1]: Started cri-containerd-b0f55300da269c00b281276d832b08b6dfec3d180672a797742fd1d97da3ee95.scope.
Mar 17 18:50:59.763998 env[1455]: time="2025-03-17T18:50:59.763944952Z" level=info msg="StartContainer for \"2526991b13b60dab44b2c63dbe21897e715751b31fcf5991d97a0451fdd12be9\" returns successfully"
Mar 17 18:50:59.785241 env[1455]: time="2025-03-17T18:50:59.785184062Z" level=info msg="StartContainer for \"b0f55300da269c00b281276d832b08b6dfec3d180672a797742fd1d97da3ee95\" returns successfully"
Mar 17 18:50:59.972848 kubelet[2514]: I0317 18:50:59.972726 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r98q8" podStartSLOduration=23.972708824 podStartE2EDuration="23.972708824s" podCreationTimestamp="2025-03-17 18:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:59.971477813 +0000 UTC m=+38.249522638" watchObservedRunningTime="2025-03-17 18:50:59.972708824 +0000 UTC m=+38.250753609"
Mar 17 18:50:59.996303 kubelet[2514]: I0317 18:50:59.996255 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nckk2" podStartSLOduration=23.996237715 podStartE2EDuration="23.996237715s" podCreationTimestamp="2025-03-17 18:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:59.995054264 +0000 UTC m=+38.273099089" watchObservedRunningTime="2025-03-17 18:50:59.996237715 +0000 UTC m=+38.274282540"
Mar 17 18:52:47.087162 systemd[1]: Started sshd@5-10.200.20.36:22-10.200.16.10:56758.service.
Mar 17 18:52:47.566537 sshd[3879]: Accepted publickey for core from 10.200.16.10 port 56758 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:52:47.568321 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:52:47.572706 systemd[1]: Started session-8.scope.
Mar 17 18:52:47.573981 systemd-logind[1442]: New session 8 of user core.
Mar 17 18:52:48.077346 sshd[3879]: pam_unix(sshd:session): session closed for user core
Mar 17 18:52:48.080380 systemd[1]: sshd@5-10.200.20.36:22-10.200.16.10:56758.service: Deactivated successfully.
Mar 17 18:52:48.081116 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:52:48.081430 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:52:48.082249 systemd-logind[1442]: Removed session 8.
Mar 17 18:52:53.154523 systemd[1]: Started sshd@6-10.200.20.36:22-10.200.16.10:46494.service.
Mar 17 18:52:53.595960 sshd[3896]: Accepted publickey for core from 10.200.16.10 port 46494 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:52:53.597623 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:52:53.601946 systemd[1]: Started session-9.scope.
Mar 17 18:52:53.602273 systemd-logind[1442]: New session 9 of user core.
Mar 17 18:52:53.975930 sshd[3896]: pam_unix(sshd:session): session closed for user core
Mar 17 18:52:53.978730 systemd[1]: sshd@6-10.200.20.36:22-10.200.16.10:46494.service: Deactivated successfully.
Mar 17 18:52:53.979470 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:52:53.980019 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:52:53.980763 systemd-logind[1442]: Removed session 9.
Mar 17 18:52:59.050264 systemd[1]: Started sshd@7-10.200.20.36:22-10.200.16.10:38710.service.
Mar 17 18:52:59.492150 sshd[3908]: Accepted publickey for core from 10.200.16.10 port 38710 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:52:59.493755 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:52:59.498038 systemd[1]: Started session-10.scope.
Mar 17 18:52:59.498344 systemd-logind[1442]: New session 10 of user core.
Mar 17 18:52:59.889174 sshd[3908]: pam_unix(sshd:session): session closed for user core
Mar 17 18:52:59.891909 systemd[1]: sshd@7-10.200.20.36:22-10.200.16.10:38710.service: Deactivated successfully.
Mar 17 18:52:59.892668 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:52:59.893194 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:52:59.893926 systemd-logind[1442]: Removed session 10.
Mar 17 18:53:04.964754 systemd[1]: Started sshd@8-10.200.20.36:22-10.200.16.10:38716.service.
Mar 17 18:53:05.406489 sshd[3920]: Accepted publickey for core from 10.200.16.10 port 38716 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:05.408120 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:05.411519 systemd-logind[1442]: New session 11 of user core.
Mar 17 18:53:05.414428 systemd[1]: Started session-11.scope.
Mar 17 18:53:05.804630 sshd[3920]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:05.807282 systemd[1]: sshd@8-10.200.20.36:22-10.200.16.10:38716.service: Deactivated successfully.
Mar 17 18:53:05.808042 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:53:05.808621 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:53:05.809478 systemd-logind[1442]: Removed session 11.
Mar 17 18:53:05.892760 systemd[1]: Started sshd@9-10.200.20.36:22-10.200.16.10:38720.service.
Mar 17 18:53:06.378080 sshd[3932]: Accepted publickey for core from 10.200.16.10 port 38720 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:06.379694 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:06.384044 systemd[1]: Started session-12.scope.
Mar 17 18:53:06.384750 systemd-logind[1442]: New session 12 of user core.
Mar 17 18:53:06.846788 sshd[3932]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:06.850135 systemd[1]: sshd@9-10.200.20.36:22-10.200.16.10:38720.service: Deactivated successfully.
Mar 17 18:53:06.850710 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:53:06.850937 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:53:06.851680 systemd-logind[1442]: Removed session 12.
Mar 17 18:53:06.928719 systemd[1]: Started sshd@10-10.200.20.36:22-10.200.16.10:38724.service.
Mar 17 18:53:07.414345 sshd[3942]: Accepted publickey for core from 10.200.16.10 port 38724 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:07.415947 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:07.419835 systemd-logind[1442]: New session 13 of user core.
Mar 17 18:53:07.420266 systemd[1]: Started session-13.scope.
Mar 17 18:53:07.850292 sshd[3942]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:07.853094 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:53:07.853348 systemd[1]: sshd@10-10.200.20.36:22-10.200.16.10:38724.service: Deactivated successfully.
Mar 17 18:53:07.854076 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:53:07.854789 systemd-logind[1442]: Removed session 13.
Mar 17 18:53:12.925297 systemd[1]: Started sshd@11-10.200.20.36:22-10.200.16.10:36120.service.
Mar 17 18:53:13.369830 sshd[3956]: Accepted publickey for core from 10.200.16.10 port 36120 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:13.371753 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:13.376659 systemd[1]: Started session-14.scope.
Mar 17 18:53:13.377003 systemd-logind[1442]: New session 14 of user core.
Mar 17 18:53:13.769849 sshd[3956]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:13.772145 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:53:13.772834 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:53:13.773010 systemd[1]: sshd@11-10.200.20.36:22-10.200.16.10:36120.service: Deactivated successfully.
Mar 17 18:53:13.774079 systemd-logind[1442]: Removed session 14.
Mar 17 18:53:18.849644 systemd[1]: Started sshd@12-10.200.20.36:22-10.200.16.10:46942.service.
Mar 17 18:53:19.328422 sshd[3967]: Accepted publickey for core from 10.200.16.10 port 46942 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:19.330059 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:19.334453 systemd[1]: Started session-15.scope.
Mar 17 18:53:19.334784 systemd-logind[1442]: New session 15 of user core.
Mar 17 18:53:19.753834 sshd[3967]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:19.756981 systemd[1]: sshd@12-10.200.20.36:22-10.200.16.10:46942.service: Deactivated successfully.
Mar 17 18:53:19.757713 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:53:19.758247 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:53:19.759235 systemd-logind[1442]: Removed session 15.
Mar 17 18:53:19.827737 systemd[1]: Started sshd@13-10.200.20.36:22-10.200.16.10:46958.service.
Mar 17 18:53:20.270228 sshd[3978]: Accepted publickey for core from 10.200.16.10 port 46958 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:20.271476 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:20.275493 systemd-logind[1442]: New session 16 of user core.
Mar 17 18:53:20.275958 systemd[1]: Started session-16.scope.
Mar 17 18:53:20.771140 sshd[3978]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:20.773985 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:53:20.774073 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:53:20.774837 systemd[1]: sshd@13-10.200.20.36:22-10.200.16.10:46958.service: Deactivated successfully.
Mar 17 18:53:20.775935 systemd-logind[1442]: Removed session 16.
Mar 17 18:53:20.856195 systemd[1]: Started sshd@14-10.200.20.36:22-10.200.16.10:46968.service.
Mar 17 18:53:21.341285 sshd[3988]: Accepted publickey for core from 10.200.16.10 port 46968 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:21.342626 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:21.346636 systemd-logind[1442]: New session 17 of user core.
Mar 17 18:53:21.347138 systemd[1]: Started session-17.scope.
Mar 17 18:53:23.013776 sshd[3988]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:23.016870 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:53:23.017465 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:53:23.017566 systemd[1]: sshd@14-10.200.20.36:22-10.200.16.10:46968.service: Deactivated successfully.
Mar 17 18:53:23.018625 systemd-logind[1442]: Removed session 17.
Mar 17 18:53:23.102365 systemd[1]: Started sshd@15-10.200.20.36:22-10.200.16.10:46974.service.
Mar 17 18:53:23.581988 sshd[4008]: Accepted publickey for core from 10.200.16.10 port 46974 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:23.583674 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:23.587976 systemd[1]: Started session-18.scope.
Mar 17 18:53:23.588299 systemd-logind[1442]: New session 18 of user core.
Mar 17 18:53:24.124727 sshd[4008]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:24.127811 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:53:24.129125 systemd[1]: sshd@15-10.200.20.36:22-10.200.16.10:46974.service: Deactivated successfully.
Mar 17 18:53:24.129883 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:53:24.130564 systemd-logind[1442]: Removed session 18.
Mar 17 18:53:24.198798 systemd[1]: Started sshd@16-10.200.20.36:22-10.200.16.10:46988.service.
Mar 17 18:53:24.647362 sshd[4018]: Accepted publickey for core from 10.200.16.10 port 46988 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:24.648998 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:24.653531 systemd[1]: Started session-19.scope.
Mar 17 18:53:24.654234 systemd-logind[1442]: New session 19 of user core.
Mar 17 18:53:25.032259 sshd[4018]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:25.035378 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:53:25.035595 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:53:25.036185 systemd[1]: sshd@16-10.200.20.36:22-10.200.16.10:46988.service: Deactivated successfully.
Mar 17 18:53:25.037260 systemd-logind[1442]: Removed session 19.
Mar 17 18:53:30.107720 systemd[1]: Started sshd@17-10.200.20.36:22-10.200.16.10:33256.service.
Mar 17 18:53:30.550667 sshd[4033]: Accepted publickey for core from 10.200.16.10 port 33256 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:30.552265 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:30.556467 systemd-logind[1442]: New session 20 of user core.
Mar 17 18:53:30.557001 systemd[1]: Started session-20.scope.
Mar 17 18:53:30.946705 sshd[4033]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:30.950317 systemd[1]: sshd@17-10.200.20.36:22-10.200.16.10:33256.service: Deactivated successfully.
Mar 17 18:53:30.950613 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:53:30.951060 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:53:30.952051 systemd-logind[1442]: Removed session 20.
Mar 17 18:53:36.021674 systemd[1]: Started sshd@18-10.200.20.36:22-10.200.16.10:33260.service.
Mar 17 18:53:36.464014 sshd[4044]: Accepted publickey for core from 10.200.16.10 port 33260 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:36.465708 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:36.470098 systemd[1]: Started session-21.scope.
Mar 17 18:53:36.471353 systemd-logind[1442]: New session 21 of user core.
Mar 17 18:53:36.846692 sshd[4044]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:36.849192 systemd[1]: sshd@18-10.200.20.36:22-10.200.16.10:33260.service: Deactivated successfully.
Mar 17 18:53:36.849963 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:53:36.850485 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:53:36.851238 systemd-logind[1442]: Removed session 21.
Mar 17 18:53:41.922008 systemd[1]: Started sshd@19-10.200.20.36:22-10.200.16.10:36392.service.
Mar 17 18:53:42.367137 sshd[4059]: Accepted publickey for core from 10.200.16.10 port 36392 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:42.368508 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:42.373791 systemd[1]: Started session-22.scope.
Mar 17 18:53:42.374214 systemd-logind[1442]: New session 22 of user core.
Mar 17 18:53:42.772759 sshd[4059]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:42.775855 systemd[1]: sshd@19-10.200.20.36:22-10.200.16.10:36392.service: Deactivated successfully.
Mar 17 18:53:42.776034 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:53:42.776552 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:53:42.777354 systemd-logind[1442]: Removed session 22.
Mar 17 18:53:47.846837 systemd[1]: Started sshd@20-10.200.20.36:22-10.200.16.10:36404.service.
Mar 17 18:53:48.288017 sshd[4074]: Accepted publickey for core from 10.200.16.10 port 36404 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:48.289637 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:48.294346 systemd[1]: Started session-23.scope.
Mar 17 18:53:48.294851 systemd-logind[1442]: New session 23 of user core.
Mar 17 18:53:48.663482 sshd[4074]: pam_unix(sshd:session): session closed for user core
Mar 17 18:53:48.666210 systemd[1]: sshd@20-10.200.20.36:22-10.200.16.10:36404.service: Deactivated successfully.
Mar 17 18:53:48.666988 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:53:48.667499 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:53:48.668227 systemd-logind[1442]: Removed session 23.
Mar 17 18:53:48.745110 systemd[1]: Started sshd@21-10.200.20.36:22-10.200.16.10:47608.service.
Mar 17 18:53:49.229768 sshd[4086]: Accepted publickey for core from 10.200.16.10 port 47608 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:49.231011 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:49.235261 systemd-logind[1442]: New session 24 of user core.
Mar 17 18:53:49.236034 systemd[1]: Started session-24.scope.
Mar 17 18:53:51.037341 env[1455]: time="2025-03-17T18:53:51.037290858Z" level=info msg="StopContainer for \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\" with timeout 30 (s)"
Mar 17 18:53:51.038146 env[1455]: time="2025-03-17T18:53:51.038108195Z" level=info msg="Stop container \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\" with signal terminated"
Mar 17 18:53:51.048479 env[1455]: time="2025-03-17T18:53:51.048413524Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:53:51.053465 systemd[1]: cri-containerd-889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b.scope: Deactivated successfully.
Mar 17 18:53:51.054769 env[1455]: time="2025-03-17T18:53:51.054725012Z" level=info msg="StopContainer for \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\" with timeout 2 (s)"
Mar 17 18:53:51.055277 env[1455]: time="2025-03-17T18:53:51.055248303Z" level=info msg="Stop container \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\" with signal terminated"
Mar 17 18:53:51.062056 systemd-networkd[1607]: lxc_health: Link DOWN
Mar 17 18:53:51.062062 systemd-networkd[1607]: lxc_health: Lost carrier
Mar 17 18:53:51.078752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b-rootfs.mount: Deactivated successfully.
Mar 17 18:53:51.085819 systemd[1]: cri-containerd-3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6.scope: Deactivated successfully.
Mar 17 18:53:51.086108 systemd[1]: cri-containerd-3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6.scope: Consumed 6.120s CPU time.
Mar 17 18:53:51.106117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6-rootfs.mount: Deactivated successfully.
Mar 17 18:53:51.116898 env[1455]: time="2025-03-17T18:53:51.116854234Z" level=info msg="shim disconnected" id=889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b
Mar 17 18:53:51.118426 env[1455]: time="2025-03-17T18:53:51.117000996Z" level=warning msg="cleaning up after shim disconnected" id=889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b namespace=k8s.io
Mar 17 18:53:51.118426 env[1455]: time="2025-03-17T18:53:51.117014877Z" level=info msg="cleaning up dead shim"
Mar 17 18:53:51.122267 env[1455]: time="2025-03-17T18:53:51.122229863Z" level=info msg="shim disconnected" id=3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6
Mar 17 18:53:51.122405 env[1455]: time="2025-03-17T18:53:51.122387946Z" level=warning msg="cleaning up after shim disconnected" id=3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6 namespace=k8s.io
Mar 17 18:53:51.122486 env[1455]: time="2025-03-17T18:53:51.122471948Z" level=info msg="cleaning up dead shim"
Mar 17 18:53:51.125417 env[1455]: time="2025-03-17T18:53:51.125382727Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4155 runtime=io.containerd.runc.v2\n"
Mar 17 18:53:51.131202 env[1455]: time="2025-03-17T18:53:51.131159284Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4164 runtime=io.containerd.runc.v2\n"
Mar 17 18:53:51.132485 env[1455]: time="2025-03-17T18:53:51.132455470Z" level=info msg="StopContainer for \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\" returns successfully"
Mar 17 18:53:51.133203 env[1455]: time="2025-03-17T18:53:51.133166085Z" level=info msg="StopPodSandbox for \"4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa\""
Mar 17 18:53:51.136189 env[1455]: time="2025-03-17T18:53:51.133231606Z" level=info msg="Container to stop \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:53:51.136780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa-shm.mount: Deactivated successfully.
Mar 17 18:53:51.137827 env[1455]: time="2025-03-17T18:53:51.137795819Z" level=info msg="StopContainer for \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\" returns successfully"
Mar 17 18:53:51.138404 env[1455]: time="2025-03-17T18:53:51.138382631Z" level=info msg="StopPodSandbox for \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\""
Mar 17 18:53:51.138648 env[1455]: time="2025-03-17T18:53:51.138627956Z" level=info msg="Container to stop \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:53:51.138760 env[1455]: time="2025-03-17T18:53:51.138739878Z" level=info msg="Container to stop \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:53:51.138829 env[1455]: time="2025-03-17T18:53:51.138812719Z" level=info msg="Container to stop \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:53:51.138891 env[1455]: time="2025-03-17T18:53:51.138874961Z" level=info msg="Container to stop \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:53:51.138960 env[1455]: time="2025-03-17T18:53:51.138943562Z" level=info msg="Container to stop \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:53:51.142117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a-shm.mount: Deactivated successfully.
Mar 17 18:53:51.147035 systemd[1]: cri-containerd-0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a.scope: Deactivated successfully.
Mar 17 18:53:51.148999 systemd[1]: cri-containerd-4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa.scope: Deactivated successfully.
Mar 17 18:53:51.190786 env[1455]: time="2025-03-17T18:53:51.190738174Z" level=info msg="shim disconnected" id=4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa
Mar 17 18:53:51.191040 env[1455]: time="2025-03-17T18:53:51.191019659Z" level=warning msg="cleaning up after shim disconnected" id=4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa namespace=k8s.io
Mar 17 18:53:51.191122 env[1455]: time="2025-03-17T18:53:51.191107581Z" level=info msg="cleaning up dead shim"
Mar 17 18:53:51.191538 env[1455]: time="2025-03-17T18:53:51.191068980Z" level=info msg="shim disconnected" id=0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a
Mar 17 18:53:51.191621 env[1455]: time="2025-03-17T18:53:51.191539750Z" level=warning msg="cleaning up after shim disconnected" id=0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a namespace=k8s.io
Mar 17 18:53:51.191621 env[1455]: time="2025-03-17T18:53:51.191549030Z" level=info msg="cleaning up dead shim"
Mar 17 18:53:51.198771 env[1455]: time="2025-03-17T18:53:51.198726456Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4222 runtime=io.containerd.runc.v2\n"
Mar 17 18:53:51.199057 env[1455]: time="2025-03-17T18:53:51.199022262Z" level=info msg="TearDown network for sandbox \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" successfully"
Mar 17 18:53:51.199057 env[1455]: time="2025-03-17T18:53:51.199048462Z" level=info msg="StopPodSandbox for \"0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a\" returns successfully"
Mar 17 18:53:51.199700 env[1455]: time="2025-03-17T18:53:51.199509752Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4221 runtime=io.containerd.runc.v2\n"
Mar 17 18:53:51.199805 env[1455]: time="2025-03-17T18:53:51.199772037Z" level=info msg="TearDown network for sandbox \"4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa\" successfully"
Mar 17 18:53:51.199805 env[1455]: time="2025-03-17T18:53:51.199791958Z" level=info msg="StopPodSandbox for \"4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa\" returns successfully"
Mar 17 18:53:51.283510 kubelet[2514]: I0317 18:53:51.283001 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:53:51.283510 kubelet[2514]: I0317 18:53:51.282941 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-net\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") "
Mar 17 18:53:51.283510 kubelet[2514]: I0317 18:53:51.283077 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4986s\" (UniqueName: \"kubernetes.io/projected/3d72697b-6ec9-491e-b7e2-97b6a32b1119-kube-api-access-4986s\") pod \"3d72697b-6ec9-491e-b7e2-97b6a32b1119\" (UID: \"3d72697b-6ec9-491e-b7e2-97b6a32b1119\") "
Mar 17 18:53:51.283510 kubelet[2514]: I0317 18:53:51.283097 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-config-path\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") "
Mar 17 18:53:51.283510 kubelet[2514]: I0317 18:53:51.283113 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-etc-cni-netd\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") "
Mar 17 18:53:51.283510 kubelet[2514]: I0317 18:53:51.283413 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-hubble-tls\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") "
Mar 17 18:53:51.284003 kubelet[2514]: I0317 18:53:51.283435 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\"
(UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-cgroup\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.284003 kubelet[2514]: I0317 18:53:51.283449 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-lib-modules\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.284003 kubelet[2514]: I0317 18:53:51.283463 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-kernel\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.284003 kubelet[2514]: I0317 18:53:51.283480 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d72697b-6ec9-491e-b7e2-97b6a32b1119-cilium-config-path\") pod \"3d72697b-6ec9-491e-b7e2-97b6a32b1119\" (UID: \"3d72697b-6ec9-491e-b7e2-97b6a32b1119\") " Mar 17 18:53:51.285338 kubelet[2514]: I0317 18:53:51.284152 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cni-path\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.285338 kubelet[2514]: I0317 18:53:51.284181 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-xtables-lock\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.285338 kubelet[2514]: I0317 
18:53:51.284196 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-hostproc\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.285338 kubelet[2514]: I0317 18:53:51.284219 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t57v\" (UniqueName: \"kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-kube-api-access-4t57v\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.285338 kubelet[2514]: I0317 18:53:51.284240 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3935933-6552-4a43-aa50-dff067ae1e27-clustermesh-secrets\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.285338 kubelet[2514]: I0317 18:53:51.284255 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-run\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.285540 kubelet[2514]: I0317 18:53:51.284269 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-bpf-maps\") pod \"f3935933-6552-4a43-aa50-dff067ae1e27\" (UID: \"f3935933-6552-4a43-aa50-dff067ae1e27\") " Mar 17 18:53:51.285540 kubelet[2514]: I0317 18:53:51.284303 2514 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-net\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 
18:53:51.285540 kubelet[2514]: I0317 18:53:51.284329 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.285540 kubelet[2514]: I0317 18:53:51.285437 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cni-path" (OuterVolumeSpecName: "cni-path") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.285540 kubelet[2514]: I0317 18:53:51.285466 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.285674 kubelet[2514]: I0317 18:53:51.285480 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-hostproc" (OuterVolumeSpecName: "hostproc") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.287150 kubelet[2514]: I0317 18:53:51.287103 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.289565 kubelet[2514]: I0317 18:53:51.287638 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.290739 kubelet[2514]: I0317 18:53:51.290707 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:53:51.290830 kubelet[2514]: I0317 18:53:51.290806 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d72697b-6ec9-491e-b7e2-97b6a32b1119-kube-api-access-4986s" (OuterVolumeSpecName: "kube-api-access-4986s") pod "3d72697b-6ec9-491e-b7e2-97b6a32b1119" (UID: "3d72697b-6ec9-491e-b7e2-97b6a32b1119"). InnerVolumeSpecName "kube-api-access-4986s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:53:51.290883 kubelet[2514]: I0317 18:53:51.290863 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-kube-api-access-4t57v" (OuterVolumeSpecName: "kube-api-access-4t57v") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "kube-api-access-4t57v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:53:51.292340 kubelet[2514]: I0317 18:53:51.292296 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d72697b-6ec9-491e-b7e2-97b6a32b1119-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d72697b-6ec9-491e-b7e2-97b6a32b1119" (UID: "3d72697b-6ec9-491e-b7e2-97b6a32b1119"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:53:51.292413 kubelet[2514]: I0317 18:53:51.292394 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3935933-6552-4a43-aa50-dff067ae1e27-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:53:51.292442 kubelet[2514]: I0317 18:53:51.292426 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.292467 kubelet[2514]: I0317 18:53:51.292442 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.292467 kubelet[2514]: I0317 18:53:51.292458 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:51.294915 kubelet[2514]: I0317 18:53:51.294571 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f3935933-6552-4a43-aa50-dff067ae1e27" (UID: "f3935933-6552-4a43-aa50-dff067ae1e27"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:53:51.296780 kubelet[2514]: I0317 18:53:51.296761 2514 scope.go:117] "RemoveContainer" containerID="3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6" Mar 17 18:53:51.302595 systemd[1]: Removed slice kubepods-burstable-podf3935933_6552_4a43_aa50_dff067ae1e27.slice. Mar 17 18:53:51.303674 env[1455]: time="2025-03-17T18:53:51.302858850Z" level=info msg="RemoveContainer for \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\"" Mar 17 18:53:51.302694 systemd[1]: kubepods-burstable-podf3935933_6552_4a43_aa50_dff067ae1e27.slice: Consumed 6.206s CPU time. 
Mar 17 18:53:51.313478 systemd[1]: Removed slice kubepods-besteffort-pod3d72697b_6ec9_491e_b7e2_97b6a32b1119.slice. Mar 17 18:53:51.321624 env[1455]: time="2025-03-17T18:53:51.321575550Z" level=info msg="RemoveContainer for \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\" returns successfully" Mar 17 18:53:51.321875 kubelet[2514]: I0317 18:53:51.321854 2514 scope.go:117] "RemoveContainer" containerID="87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c" Mar 17 18:53:51.324425 env[1455]: time="2025-03-17T18:53:51.324288045Z" level=info msg="RemoveContainer for \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\"" Mar 17 18:53:51.338414 env[1455]: time="2025-03-17T18:53:51.338369131Z" level=info msg="RemoveContainer for \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\" returns successfully" Mar 17 18:53:51.338982 kubelet[2514]: I0317 18:53:51.338959 2514 scope.go:117] "RemoveContainer" containerID="8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce" Mar 17 18:53:51.340917 env[1455]: time="2025-03-17T18:53:51.340881582Z" level=info msg="RemoveContainer for \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\"" Mar 17 18:53:51.376123 env[1455]: time="2025-03-17T18:53:51.376080257Z" level=info msg="RemoveContainer for \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\" returns successfully" Mar 17 18:53:51.376346 kubelet[2514]: I0317 18:53:51.376313 2514 scope.go:117] "RemoveContainer" containerID="05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884" Mar 17 18:53:51.377469 env[1455]: time="2025-03-17T18:53:51.377439685Z" level=info msg="RemoveContainer for \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\"" Mar 17 18:53:51.384996 kubelet[2514]: I0317 18:53:51.384970 2514 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4986s\" (UniqueName: 
\"kubernetes.io/projected/3d72697b-6ec9-491e-b7e2-97b6a32b1119-kube-api-access-4986s\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385117 kubelet[2514]: I0317 18:53:51.385104 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-config-path\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385192 kubelet[2514]: I0317 18:53:51.385182 2514 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-etc-cni-netd\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385267 kubelet[2514]: I0317 18:53:51.385257 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-cgroup\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385339 kubelet[2514]: I0317 18:53:51.385317 2514 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-lib-modules\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385411 kubelet[2514]: I0317 18:53:51.385391 2514 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-hubble-tls\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385480 kubelet[2514]: I0317 18:53:51.385470 2514 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cni-path\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385548 kubelet[2514]: I0317 18:53:51.385527 2514 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-xtables-lock\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385636 kubelet[2514]: I0317 18:53:51.385626 2514 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-host-proc-sys-kernel\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385715 kubelet[2514]: I0317 18:53:51.385704 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d72697b-6ec9-491e-b7e2-97b6a32b1119-cilium-config-path\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385787 kubelet[2514]: I0317 18:53:51.385776 2514 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-hostproc\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385864 kubelet[2514]: I0317 18:53:51.385838 2514 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4t57v\" (UniqueName: \"kubernetes.io/projected/f3935933-6552-4a43-aa50-dff067ae1e27-kube-api-access-4t57v\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.385939 kubelet[2514]: I0317 18:53:51.385917 2514 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-bpf-maps\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.386000 kubelet[2514]: I0317 18:53:51.385989 2514 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3935933-6552-4a43-aa50-dff067ae1e27-clustermesh-secrets\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.386077 kubelet[2514]: I0317 18:53:51.386066 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/f3935933-6552-4a43-aa50-dff067ae1e27-cilium-run\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\"" Mar 17 18:53:51.389750 env[1455]: time="2025-03-17T18:53:51.389715494Z" level=info msg="RemoveContainer for \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\" returns successfully" Mar 17 18:53:51.389954 kubelet[2514]: I0317 18:53:51.389939 2514 scope.go:117] "RemoveContainer" containerID="65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9" Mar 17 18:53:51.391101 env[1455]: time="2025-03-17T18:53:51.391069761Z" level=info msg="RemoveContainer for \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\"" Mar 17 18:53:51.402132 env[1455]: time="2025-03-17T18:53:51.402098545Z" level=info msg="RemoveContainer for \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\" returns successfully" Mar 17 18:53:51.402320 kubelet[2514]: I0317 18:53:51.402298 2514 scope.go:117] "RemoveContainer" containerID="3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6" Mar 17 18:53:51.402626 env[1455]: time="2025-03-17T18:53:51.402503514Z" level=error msg="ContainerStatus for \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\": not found" Mar 17 18:53:51.402725 kubelet[2514]: E0317 18:53:51.402696 2514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\": not found" containerID="3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6" Mar 17 18:53:51.402812 kubelet[2514]: I0317 18:53:51.402727 2514 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6"} err="failed to get container status \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3499f30d8fef3064ab6b90938eecef7e5c86180ed802d68e209ea826d2935fb6\": not found" Mar 17 18:53:51.402812 kubelet[2514]: I0317 18:53:51.402811 2514 scope.go:117] "RemoveContainer" containerID="87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c" Mar 17 18:53:51.403029 env[1455]: time="2025-03-17T18:53:51.402979883Z" level=error msg="ContainerStatus for \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\": not found" Mar 17 18:53:51.403155 kubelet[2514]: E0317 18:53:51.403129 2514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\": not found" containerID="87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c" Mar 17 18:53:51.403204 kubelet[2514]: I0317 18:53:51.403156 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c"} err="failed to get container status \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\": rpc error: code = NotFound desc = an error occurred when try to find container \"87e2f204da006c8db6e454770e45d547b7164a2894f9dba27ceb2b03df1ee62c\": not found" Mar 17 18:53:51.403204 kubelet[2514]: I0317 18:53:51.403171 2514 scope.go:117] "RemoveContainer" containerID="8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce" Mar 17 18:53:51.403464 env[1455]: 
time="2025-03-17T18:53:51.403383531Z" level=error msg="ContainerStatus for \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\": not found" Mar 17 18:53:51.403534 kubelet[2514]: E0317 18:53:51.403509 2514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\": not found" containerID="8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce" Mar 17 18:53:51.403534 kubelet[2514]: I0317 18:53:51.403527 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce"} err="failed to get container status \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f15aa1ebf40968a36a785bb705b5f43e84c4da5c333ce71f2375b065a2eb2ce\": not found" Mar 17 18:53:51.403605 kubelet[2514]: I0317 18:53:51.403540 2514 scope.go:117] "RemoveContainer" containerID="05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884" Mar 17 18:53:51.403830 env[1455]: time="2025-03-17T18:53:51.403749899Z" level=error msg="ContainerStatus for \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\": not found" Mar 17 18:53:51.403892 kubelet[2514]: E0317 18:53:51.403864 2514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\": not found" containerID="05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884" Mar 17 18:53:51.403892 kubelet[2514]: I0317 18:53:51.403881 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884"} err="failed to get container status \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\": rpc error: code = NotFound desc = an error occurred when try to find container \"05441f858a531921af6e09e846cca40c7e1a8380dab886ce66c5eb1b662ae884\": not found" Mar 17 18:53:51.403943 kubelet[2514]: I0317 18:53:51.403893 2514 scope.go:117] "RemoveContainer" containerID="65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9" Mar 17 18:53:51.404090 env[1455]: time="2025-03-17T18:53:51.404038665Z" level=error msg="ContainerStatus for \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\": not found" Mar 17 18:53:51.404202 kubelet[2514]: E0317 18:53:51.404176 2514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\": not found" containerID="65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9" Mar 17 18:53:51.404246 kubelet[2514]: I0317 18:53:51.404203 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9"} err="failed to get container status \"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"65c20d26d2eb69a84ba9f7ebb77345135bd0a52ce01b2cde47a4c66e68791ec9\": not found" Mar 17 18:53:51.404246 kubelet[2514]: I0317 18:53:51.404219 2514 scope.go:117] "RemoveContainer" containerID="889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b" Mar 17 18:53:51.405456 env[1455]: time="2025-03-17T18:53:51.405211689Z" level=info msg="RemoveContainer for \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\"" Mar 17 18:53:51.422914 env[1455]: time="2025-03-17T18:53:51.422821526Z" level=info msg="RemoveContainer for \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\" returns successfully" Mar 17 18:53:51.422997 kubelet[2514]: I0317 18:53:51.422977 2514 scope.go:117] "RemoveContainer" containerID="889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b" Mar 17 18:53:51.423228 env[1455]: time="2025-03-17T18:53:51.423173013Z" level=error msg="ContainerStatus for \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\": not found" Mar 17 18:53:51.423366 kubelet[2514]: E0317 18:53:51.423336 2514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\": not found" containerID="889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b" Mar 17 18:53:51.423403 kubelet[2514]: I0317 18:53:51.423364 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b"} err="failed to get container status \"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"889b9c7a6dde944e3013f09bab00c2258ac40fa48bdad5befa50d5584ce8e89b\": not found" Mar 17 18:53:51.831255 kubelet[2514]: I0317 18:53:51.831213 2514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d72697b-6ec9-491e-b7e2-97b6a32b1119" path="/var/lib/kubelet/pods/3d72697b-6ec9-491e-b7e2-97b6a32b1119/volumes" Mar 17 18:53:51.831667 kubelet[2514]: I0317 18:53:51.831644 2514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" path="/var/lib/kubelet/pods/f3935933-6552-4a43-aa50-dff067ae1e27/volumes" Mar 17 18:53:51.932446 kubelet[2514]: E0317 18:53:51.932414 2514 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:53:52.030474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d4aa9218a368fd2750bb56d1d810b8bd93cd3847434ac461c959e9b5ac62cfa-rootfs.mount: Deactivated successfully. Mar 17 18:53:52.030570 systemd[1]: var-lib-kubelet-pods-3d72697b\x2d6ec9\x2d491e\x2db7e2\x2d97b6a32b1119-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4986s.mount: Deactivated successfully. Mar 17 18:53:52.030652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0989055b5442d849bd9b3f56f49948352c0f2e006f4bed67d05d4d9c1639757a-rootfs.mount: Deactivated successfully. Mar 17 18:53:52.030710 systemd[1]: var-lib-kubelet-pods-f3935933\x2d6552\x2d4a43\x2daa50\x2ddff067ae1e27-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4t57v.mount: Deactivated successfully. Mar 17 18:53:52.030760 systemd[1]: var-lib-kubelet-pods-f3935933\x2d6552\x2d4a43\x2daa50\x2ddff067ae1e27-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:53:52.030811 systemd[1]: var-lib-kubelet-pods-f3935933\x2d6552\x2d4a43\x2daa50\x2ddff067ae1e27-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 18:53:53.044322 sshd[4086]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:53.047138 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:53:53.047296 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:53:53.048138 systemd[1]: sshd@21-10.200.20.36:22-10.200.16.10:47608.service: Deactivated successfully. Mar 17 18:53:53.049177 systemd-logind[1442]: Removed session 24. Mar 17 18:53:53.133009 systemd[1]: Started sshd@22-10.200.20.36:22-10.200.16.10:47618.service. Mar 17 18:53:53.574406 sshd[4256]: Accepted publickey for core from 10.200.16.10 port 47618 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:53.576051 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:53.580373 systemd[1]: Started session-25.scope. Mar 17 18:53:53.581656 systemd-logind[1442]: New session 25 of user core. Mar 17 18:53:55.363260 kubelet[2514]: I0317 18:53:55.363208 2514 topology_manager.go:215] "Topology Admit Handler" podUID="f65017b6-dc54-4543-8304-b65c13f2daf7" podNamespace="kube-system" podName="cilium-8tx6v" Mar 17 18:53:55.363622 kubelet[2514]: E0317 18:53:55.363275 2514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" containerName="mount-bpf-fs" Mar 17 18:53:55.363622 kubelet[2514]: E0317 18:53:55.363285 2514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" containerName="mount-cgroup" Mar 17 18:53:55.363622 kubelet[2514]: E0317 18:53:55.363291 2514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" containerName="apply-sysctl-overwrites" Mar 17 18:53:55.363622 kubelet[2514]: E0317 18:53:55.363308 2514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" containerName="clean-cilium-state" Mar 17 18:53:55.363622 kubelet[2514]: 
E0317 18:53:55.363314 2514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d72697b-6ec9-491e-b7e2-97b6a32b1119" containerName="cilium-operator" Mar 17 18:53:55.363622 kubelet[2514]: E0317 18:53:55.363321 2514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" containerName="cilium-agent" Mar 17 18:53:55.363622 kubelet[2514]: I0317 18:53:55.363347 2514 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d72697b-6ec9-491e-b7e2-97b6a32b1119" containerName="cilium-operator" Mar 17 18:53:55.363622 kubelet[2514]: I0317 18:53:55.363353 2514 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3935933-6552-4a43-aa50-dff067ae1e27" containerName="cilium-agent" Mar 17 18:53:55.369078 systemd[1]: Created slice kubepods-burstable-podf65017b6_dc54_4543_8304_b65c13f2daf7.slice. Mar 17 18:53:55.375956 sshd[4256]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:55.379254 systemd[1]: sshd@22-10.200.20.36:22-10.200.16.10:47618.service: Deactivated successfully. Mar 17 18:53:55.379958 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:53:55.380109 systemd[1]: session-25.scope: Consumed 1.399s CPU time. Mar 17 18:53:55.381085 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:53:55.382266 systemd-logind[1442]: Removed session 25. 
Mar 17 18:53:55.406895 kubelet[2514]: I0317 18:53:55.406859 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-run\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407096 kubelet[2514]: I0317 18:53:55.407080 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-etc-cni-netd\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407184 kubelet[2514]: I0317 18:53:55.407173 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-xtables-lock\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407251 kubelet[2514]: I0317 18:53:55.407240 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-net\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407360 kubelet[2514]: I0317 18:53:55.407349 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrg7\" (UniqueName: \"kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-kube-api-access-qsrg7\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407452 kubelet[2514]: I0317 18:53:55.407440 2514 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cni-path\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407518 kubelet[2514]: I0317 18:53:55.407506 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-cgroup\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407594 kubelet[2514]: I0317 18:53:55.407567 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-lib-modules\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407672 kubelet[2514]: I0317 18:53:55.407659 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-config-path\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407747 kubelet[2514]: I0317 18:53:55.407732 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-clustermesh-secrets\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407814 kubelet[2514]: I0317 18:53:55.407801 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-hubble-tls\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407892 kubelet[2514]: I0317 18:53:55.407878 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-bpf-maps\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.407957 kubelet[2514]: I0317 18:53:55.407946 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-hostproc\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.408023 kubelet[2514]: I0317 18:53:55.408012 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-ipsec-secrets\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.408092 kubelet[2514]: I0317 18:53:55.408080 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-kernel\") pod \"cilium-8tx6v\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " pod="kube-system/cilium-8tx6v" Mar 17 18:53:55.456358 systemd[1]: Started sshd@23-10.200.20.36:22-10.200.16.10:47622.service. 
Mar 17 18:53:55.672953 env[1455]: time="2025-03-17T18:53:55.672527192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tx6v,Uid:f65017b6-dc54-4543-8304-b65c13f2daf7,Namespace:kube-system,Attempt:0,}" Mar 17 18:53:55.721558 env[1455]: time="2025-03-17T18:53:55.721479761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:53:55.721558 env[1455]: time="2025-03-17T18:53:55.721521201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:53:55.721772 env[1455]: time="2025-03-17T18:53:55.721539762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:53:55.722091 env[1455]: time="2025-03-17T18:53:55.721999091Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4 pid=4279 runtime=io.containerd.runc.v2 Mar 17 18:53:55.735431 systemd[1]: Started cri-containerd-1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4.scope. 
Mar 17 18:53:55.758721 env[1455]: time="2025-03-17T18:53:55.758674896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tx6v,Uid:f65017b6-dc54-4543-8304-b65c13f2daf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4\"" Mar 17 18:53:55.762788 env[1455]: time="2025-03-17T18:53:55.762748977Z" level=info msg="CreateContainer within sandbox \"1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:53:55.815783 env[1455]: time="2025-03-17T18:53:55.815708345Z" level=info msg="CreateContainer within sandbox \"1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\"" Mar 17 18:53:55.816919 env[1455]: time="2025-03-17T18:53:55.816419719Z" level=info msg="StartContainer for \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\"" Mar 17 18:53:55.835171 systemd[1]: Started cri-containerd-c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2.scope. Mar 17 18:53:55.845771 systemd[1]: cri-containerd-c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2.scope: Deactivated successfully. Mar 17 18:53:55.846026 systemd[1]: Stopped cri-containerd-c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2.scope. 
Mar 17 18:53:55.917909 env[1455]: time="2025-03-17T18:53:55.917859406Z" level=info msg="shim disconnected" id=c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2 Mar 17 18:53:55.918132 env[1455]: time="2025-03-17T18:53:55.918114611Z" level=warning msg="cleaning up after shim disconnected" id=c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2 namespace=k8s.io Mar 17 18:53:55.918222 env[1455]: time="2025-03-17T18:53:55.918179332Z" level=info msg="cleaning up dead shim" Mar 17 18:53:55.925524 env[1455]: time="2025-03-17T18:53:55.925425035Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4337 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:53:55Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:53:55.925981 env[1455]: time="2025-03-17T18:53:55.925884324Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Mar 17 18:53:55.926279 env[1455]: time="2025-03-17T18:53:55.926249132Z" level=error msg="Failed to pipe stdout of container \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\"" error="reading from a closed fifo" Mar 17 18:53:55.926414 env[1455]: time="2025-03-17T18:53:55.926372294Z" level=error msg="Failed to pipe stderr of container \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\"" error="reading from a closed fifo" Mar 17 18:53:55.931440 env[1455]: time="2025-03-17T18:53:55.931385953Z" level=error msg="StartContainer for \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:53:55.931714 kubelet[2514]: E0317 18:53:55.931671 2514 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2" Mar 17 18:53:55.932681 kubelet[2514]: E0317 18:53:55.931846 2514 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:53:55.932681 kubelet[2514]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:53:55.932681 kubelet[2514]: rm /hostbin/cilium-mount Mar 17 18:53:55.932809 kubelet[2514]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsrg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8tx6v_kube-system(f65017b6-dc54-4543-8304-b65c13f2daf7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:53:55.932809 kubelet[2514]: E0317 18:53:55.931893 2514 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8tx6v" podUID="f65017b6-dc54-4543-8304-b65c13f2daf7" Mar 17 18:53:55.938171 sshd[4266]: Accepted publickey for core from 10.200.16.10 port 47622 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:55.938995 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:55.944854 systemd[1]: Started session-26.scope. Mar 17 18:53:55.946106 systemd-logind[1442]: New session 26 of user core. Mar 17 18:53:56.312766 env[1455]: time="2025-03-17T18:53:56.312729058Z" level=info msg="StopPodSandbox for \"1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4\"" Mar 17 18:53:56.312984 env[1455]: time="2025-03-17T18:53:56.312959303Z" level=info msg="Container to stop \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:53:56.319176 systemd[1]: cri-containerd-1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4.scope: Deactivated successfully. 
Mar 17 18:53:56.366146 env[1455]: time="2025-03-17T18:53:56.366083507Z" level=info msg="shim disconnected" id=1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4 Mar 17 18:53:56.366146 env[1455]: time="2025-03-17T18:53:56.366136868Z" level=warning msg="cleaning up after shim disconnected" id=1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4 namespace=k8s.io Mar 17 18:53:56.366146 env[1455]: time="2025-03-17T18:53:56.366147868Z" level=info msg="cleaning up dead shim" Mar 17 18:53:56.372937 env[1455]: time="2025-03-17T18:53:56.372889601Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4374 runtime=io.containerd.runc.v2\n" Mar 17 18:53:56.373222 env[1455]: time="2025-03-17T18:53:56.373192607Z" level=info msg="TearDown network for sandbox \"1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4\" successfully" Mar 17 18:53:56.373280 env[1455]: time="2025-03-17T18:53:56.373221007Z" level=info msg="StopPodSandbox for \"1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4\" returns successfully" Mar 17 18:53:56.377072 sshd[4266]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:56.380262 systemd[1]: sshd@23-10.200.20.36:22-10.200.16.10:47622.service: Deactivated successfully. Mar 17 18:53:56.380964 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:53:56.382419 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:53:56.386762 systemd-logind[1442]: Removed session 26. 
Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413295 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-kernel\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413353 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-config-path\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413389 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-clustermesh-secrets\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413408 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-xtables-lock\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413424 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-net\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413446 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-etc-cni-netd\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413461 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-bpf-maps\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413480 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsrg7\" (UniqueName: \"kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-kube-api-access-qsrg7\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413496 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-lib-modules\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413513 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-hubble-tls\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413539 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-hostproc\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413559 2514 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-ipsec-secrets\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413576 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-run\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413611 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-cgroup\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413626 2514 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cni-path\") pod \"f65017b6-dc54-4543-8304-b65c13f2daf7\" (UID: \"f65017b6-dc54-4543-8304-b65c13f2daf7\") " Mar 17 18:53:56.413851 kubelet[2514]: I0317 18:53:56.413347 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.414512 kubelet[2514]: I0317 18:53:56.413686 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cni-path" (OuterVolumeSpecName: "cni-path") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.414512 kubelet[2514]: I0317 18:53:56.414027 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.414512 kubelet[2514]: I0317 18:53:56.414053 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.414512 kubelet[2514]: I0317 18:53:56.414076 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.414512 kubelet[2514]: I0317 18:53:56.414093 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.414512 kubelet[2514]: I0317 18:53:56.414106 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.417085 kubelet[2514]: I0317 18:53:56.417049 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-hostproc" (OuterVolumeSpecName: "hostproc") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.417662 kubelet[2514]: I0317 18:53:56.417550 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.417733 kubelet[2514]: I0317 18:53:56.417719 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:53:56.419836 kubelet[2514]: I0317 18:53:56.419803 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:53:56.419928 kubelet[2514]: I0317 18:53:56.419876 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:53:56.420177 kubelet[2514]: I0317 18:53:56.420148 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-kube-api-access-qsrg7" (OuterVolumeSpecName: "kube-api-access-qsrg7") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "kube-api-access-qsrg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:53:56.420490 kubelet[2514]: I0317 18:53:56.420465 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:53:56.421047 kubelet[2514]: I0317 18:53:56.421022 2514 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f65017b6-dc54-4543-8304-b65c13f2daf7" (UID: "f65017b6-dc54-4543-8304-b65c13f2daf7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:53:56.450629 systemd[1]: Started sshd@24-10.200.20.36:22-10.200.16.10:47626.service. 
Mar 17 18:53:56.513846 kubelet[2514]: I0317 18:53:56.513813 2514 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-lib-modules\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514029 kubelet[2514]: I0317 18:53:56.514013 2514 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-hubble-tls\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514133 kubelet[2514]: I0317 18:53:56.514122 2514 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-hostproc\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514210 kubelet[2514]: I0317 18:53:56.514199 2514 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qsrg7\" (UniqueName: \"kubernetes.io/projected/f65017b6-dc54-4543-8304-b65c13f2daf7-kube-api-access-qsrg7\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514281 kubelet[2514]: I0317 18:53:56.514259 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-ipsec-secrets\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514350 kubelet[2514]: I0317 18:53:56.514329 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-run\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514414 kubelet[2514]: I0317 18:53:56.514403 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-cgroup\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514486 kubelet[2514]: I0317 18:53:56.514477 2514 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-cni-path\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514560 kubelet[2514]: I0317 18:53:56.514536 2514 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-kernel\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514663 kubelet[2514]: I0317 18:53:56.514651 2514 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f65017b6-dc54-4543-8304-b65c13f2daf7-cilium-config-path\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514740 kubelet[2514]: I0317 18:53:56.514730 2514 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f65017b6-dc54-4543-8304-b65c13f2daf7-clustermesh-secrets\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514823 kubelet[2514]: I0317 18:53:56.514813 2514 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-xtables-lock\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514903 kubelet[2514]: I0317 18:53:56.514874 2514 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-host-proc-sys-net\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.514980 kubelet[2514]: I0317 18:53:56.514952 2514 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-etc-cni-netd\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.515039 kubelet[2514]: I0317 18:53:56.515029 2514 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f65017b6-dc54-4543-8304-b65c13f2daf7-bpf-maps\") on node \"ci-3510.3.7-a-ffee15dd16\" DevicePath \"\""
Mar 17 18:53:56.518329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4-rootfs.mount: Deactivated successfully.
Mar 17 18:53:56.518432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ae894d5ba68f1b5ce181e800539bd4706d954470e1190b269471c9b931eedb4-shm.mount: Deactivated successfully.
Mar 17 18:53:56.518493 systemd[1]: var-lib-kubelet-pods-f65017b6\x2ddc54\x2d4543\x2d8304\x2db65c13f2daf7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqsrg7.mount: Deactivated successfully.
Mar 17 18:53:56.518548 systemd[1]: var-lib-kubelet-pods-f65017b6\x2ddc54\x2d4543\x2d8304\x2db65c13f2daf7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:53:56.518656 systemd[1]: var-lib-kubelet-pods-f65017b6\x2ddc54\x2d4543\x2d8304\x2db65c13f2daf7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:53:56.518710 systemd[1]: var-lib-kubelet-pods-f65017b6\x2ddc54\x2d4543\x2d8304\x2db65c13f2daf7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:53:56.712384 kubelet[2514]: I0317 18:53:56.710400 2514 setters.go:580] "Node became not ready" node="ci-3510.3.7-a-ffee15dd16" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:53:56Z","lastTransitionTime":"2025-03-17T18:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:53:56.892801 sshd[4394]: Accepted publickey for core from 10.200.16.10 port 47626 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY
Mar 17 18:53:56.894074 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:53:56.897994 systemd-logind[1442]: New session 27 of user core.
Mar 17 18:53:56.898397 systemd[1]: Started session-27.scope.
Mar 17 18:53:56.933934 kubelet[2514]: E0317 18:53:56.933891 2514 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:53:57.315147 kubelet[2514]: I0317 18:53:57.315115 2514 scope.go:117] "RemoveContainer" containerID="c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2"
Mar 17 18:53:57.316454 env[1455]: time="2025-03-17T18:53:57.316413669Z" level=info msg="RemoveContainer for \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\""
Mar 17 18:53:57.319509 systemd[1]: Removed slice kubepods-burstable-podf65017b6_dc54_4543_8304_b65c13f2daf7.slice.
Mar 17 18:53:57.331477 env[1455]: time="2025-03-17T18:53:57.331429283Z" level=info msg="RemoveContainer for \"c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2\" returns successfully"
Mar 17 18:53:57.370967 kubelet[2514]: I0317 18:53:57.370930 2514 topology_manager.go:215] "Topology Admit Handler" podUID="c1362699-a1fe-4d96-bb53-2ad2e65daacc" podNamespace="kube-system" podName="cilium-sff2s"
Mar 17 18:53:57.371186 kubelet[2514]: E0317 18:53:57.371172 2514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f65017b6-dc54-4543-8304-b65c13f2daf7" containerName="mount-cgroup"
Mar 17 18:53:57.371291 kubelet[2514]: I0317 18:53:57.371280 2514 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65017b6-dc54-4543-8304-b65c13f2daf7" containerName="mount-cgroup"
Mar 17 18:53:57.377049 systemd[1]: Created slice kubepods-burstable-podc1362699_a1fe_4d96_bb53_2ad2e65daacc.slice.
Mar 17 18:53:57.422451 kubelet[2514]: I0317 18:53:57.422122 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1362699-a1fe-4d96-bb53-2ad2e65daacc-cilium-ipsec-secrets\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.422929 kubelet[2514]: I0317 18:53:57.422907 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-host-proc-sys-kernel\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423047 kubelet[2514]: I0317 18:53:57.423033 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-etc-cni-netd\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423133 kubelet[2514]: I0317 18:53:57.423118 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxvt\" (UniqueName: \"kubernetes.io/projected/c1362699-a1fe-4d96-bb53-2ad2e65daacc-kube-api-access-jcxvt\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423231 kubelet[2514]: I0317 18:53:57.423218 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-cilium-cgroup\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423335 kubelet[2514]: I0317 18:53:57.423321 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-hostproc\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423438 kubelet[2514]: I0317 18:53:57.423424 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-cni-path\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423528 kubelet[2514]: I0317 18:53:57.423514 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-host-proc-sys-net\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423632 kubelet[2514]: I0317 18:53:57.423619 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-bpf-maps\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423742 kubelet[2514]: I0317 18:53:57.423726 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1362699-a1fe-4d96-bb53-2ad2e65daacc-clustermesh-secrets\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423836 kubelet[2514]: I0317 18:53:57.423822 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-xtables-lock\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.423931 kubelet[2514]: I0317 18:53:57.423917 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1362699-a1fe-4d96-bb53-2ad2e65daacc-cilium-config-path\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.424025 kubelet[2514]: I0317 18:53:57.424012 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-cilium-run\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.424117 kubelet[2514]: I0317 18:53:57.424103 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1362699-a1fe-4d96-bb53-2ad2e65daacc-lib-modules\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.424207 kubelet[2514]: I0317 18:53:57.424195 2514 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1362699-a1fe-4d96-bb53-2ad2e65daacc-hubble-tls\") pod \"cilium-sff2s\" (UID: \"c1362699-a1fe-4d96-bb53-2ad2e65daacc\") " pod="kube-system/cilium-sff2s"
Mar 17 18:53:57.680026 env[1455]: time="2025-03-17T18:53:57.679916690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sff2s,Uid:c1362699-a1fe-4d96-bb53-2ad2e65daacc,Namespace:kube-system,Attempt:0,}"
Mar 17 18:53:57.733679 env[1455]: time="2025-03-17T18:53:57.733613419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:53:57.733811 env[1455]: time="2025-03-17T18:53:57.733688020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:53:57.733811 env[1455]: time="2025-03-17T18:53:57.733713141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:53:57.734119 env[1455]: time="2025-03-17T18:53:57.733956466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55 pid=4414 runtime=io.containerd.runc.v2
Mar 17 18:53:57.744523 systemd[1]: Started cri-containerd-3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55.scope.
Mar 17 18:53:57.768897 env[1455]: time="2025-03-17T18:53:57.768851547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sff2s,Uid:c1362699-a1fe-4d96-bb53-2ad2e65daacc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\""
Mar 17 18:53:57.773303 env[1455]: time="2025-03-17T18:53:57.773268754Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:53:57.832629 kubelet[2514]: I0317 18:53:57.832575 2514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65017b6-dc54-4543-8304-b65c13f2daf7" path="/var/lib/kubelet/pods/f65017b6-dc54-4543-8304-b65c13f2daf7/volumes"
Mar 17 18:53:57.836869 env[1455]: time="2025-03-17T18:53:57.836832115Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930\""
Mar 17 18:53:57.837577 env[1455]: time="2025-03-17T18:53:57.837545449Z" level=info msg="StartContainer for \"135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930\""
Mar 17 18:53:57.851470 systemd[1]: Started cri-containerd-135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930.scope.
Mar 17 18:53:57.887959 env[1455]: time="2025-03-17T18:53:57.887916153Z" level=info msg="StartContainer for \"135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930\" returns successfully"
Mar 17 18:53:57.891752 systemd[1]: cri-containerd-135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930.scope: Deactivated successfully.
Mar 17 18:53:57.969979 env[1455]: time="2025-03-17T18:53:57.969869314Z" level=info msg="shim disconnected" id=135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930
Mar 17 18:53:57.970223 env[1455]: time="2025-03-17T18:53:57.970202601Z" level=warning msg="cleaning up after shim disconnected" id=135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930 namespace=k8s.io
Mar 17 18:53:57.970312 env[1455]: time="2025-03-17T18:53:57.970296682Z" level=info msg="cleaning up dead shim"
Mar 17 18:53:57.984086 env[1455]: time="2025-03-17T18:53:57.984047671Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4498 runtime=io.containerd.runc.v2\n"
Mar 17 18:53:58.322185 env[1455]: time="2025-03-17T18:53:58.322138636Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:53:58.371861 env[1455]: time="2025-03-17T18:53:58.371809001Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c\""
Mar 17 18:53:58.372608 env[1455]: time="2025-03-17T18:53:58.372564055Z" level=info msg="StartContainer for \"d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c\""
Mar 17 18:53:58.387004 systemd[1]: Started cri-containerd-d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c.scope.
Mar 17 18:53:58.416173 env[1455]: time="2025-03-17T18:53:58.416125061Z" level=info msg="StartContainer for \"d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c\" returns successfully"
Mar 17 18:53:58.420426 systemd[1]: cri-containerd-d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c.scope: Deactivated successfully.
Mar 17 18:53:58.460803 env[1455]: time="2025-03-17T18:53:58.460759927Z" level=info msg="shim disconnected" id=d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c
Mar 17 18:53:58.461077 env[1455]: time="2025-03-17T18:53:58.461057213Z" level=warning msg="cleaning up after shim disconnected" id=d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c namespace=k8s.io
Mar 17 18:53:58.461180 env[1455]: time="2025-03-17T18:53:58.461165015Z" level=info msg="cleaning up dead shim"
Mar 17 18:53:58.467894 env[1455]: time="2025-03-17T18:53:58.467860425Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4561 runtime=io.containerd.runc.v2\n"
Mar 17 18:53:59.024250 kubelet[2514]: W0317 18:53:59.023446 2514 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65017b6_dc54_4543_8304_b65c13f2daf7.slice/cri-containerd-c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2.scope WatchSource:0}: container "c8ee1174dbcc7b76e7a797fcdaf9bb763207bef31fe544a5f23e3489ad1b3fd2" in namespace "k8s.io": not found
Mar 17 18:53:59.325844 env[1455]: time="2025-03-17T18:53:59.325507956Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:53:59.355087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469066119.mount: Deactivated successfully.
Mar 17 18:53:59.368335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049525709.mount: Deactivated successfully.
Mar 17 18:53:59.382081 env[1455]: time="2025-03-17T18:53:59.382034326Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20\""
Mar 17 18:53:59.382887 env[1455]: time="2025-03-17T18:53:59.382862742Z" level=info msg="StartContainer for \"2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20\""
Mar 17 18:53:59.398568 systemd[1]: Started cri-containerd-2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20.scope.
Mar 17 18:53:59.429507 systemd[1]: cri-containerd-2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20.scope: Deactivated successfully.
Mar 17 18:53:59.436094 env[1455]: time="2025-03-17T18:53:59.436058009Z" level=info msg="StartContainer for \"2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20\" returns successfully"
Mar 17 18:53:59.479474 env[1455]: time="2025-03-17T18:53:59.479429245Z" level=info msg="shim disconnected" id=2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20
Mar 17 18:53:59.479730 env[1455]: time="2025-03-17T18:53:59.479710251Z" level=warning msg="cleaning up after shim disconnected" id=2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20 namespace=k8s.io
Mar 17 18:53:59.479797 env[1455]: time="2025-03-17T18:53:59.479784412Z" level=info msg="cleaning up dead shim"
Mar 17 18:53:59.487283 env[1455]: time="2025-03-17T18:53:59.487244996Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:53:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4618 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:00.328392 env[1455]: time="2025-03-17T18:54:00.328351305Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:54:00.367637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580531680.mount: Deactivated successfully.
Mar 17 18:54:00.381757 env[1455]: time="2025-03-17T18:54:00.381713008Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c\""
Mar 17 18:54:00.383776 env[1455]: time="2025-03-17T18:54:00.382781749Z" level=info msg="StartContainer for \"e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c\""
Mar 17 18:54:00.399732 systemd[1]: Started cri-containerd-e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c.scope.
Mar 17 18:54:00.423844 systemd[1]: cri-containerd-e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c.scope: Deactivated successfully.
Mar 17 18:54:00.429010 env[1455]: time="2025-03-17T18:54:00.428958194Z" level=info msg="StartContainer for \"e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c\" returns successfully"
Mar 17 18:54:00.476789 env[1455]: time="2025-03-17T18:54:00.476732830Z" level=info msg="shim disconnected" id=e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c
Mar 17 18:54:00.477560 env[1455]: time="2025-03-17T18:54:00.477522966Z" level=warning msg="cleaning up after shim disconnected" id=e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c namespace=k8s.io
Mar 17 18:54:00.477664 env[1455]: time="2025-03-17T18:54:00.477648368Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:00.484804 env[1455]: time="2025-03-17T18:54:00.484778145Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4675 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:00.530170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c-rootfs.mount: Deactivated successfully.
Mar 17 18:54:01.333596 env[1455]: time="2025-03-17T18:54:01.333542782Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:54:01.390559 env[1455]: time="2025-03-17T18:54:01.390504387Z" level=info msg="CreateContainer within sandbox \"3259638fa2a3bd402039568a69a793079c23477b81ed3226ba0a73ce7a48eb55\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a\""
Mar 17 18:54:01.391288 env[1455]: time="2025-03-17T18:54:01.391261162Z" level=info msg="StartContainer for \"e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a\""
Mar 17 18:54:01.409369 systemd[1]: Started cri-containerd-e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a.scope.
Mar 17 18:54:01.450750 env[1455]: time="2025-03-17T18:54:01.450697615Z" level=info msg="StartContainer for \"e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a\" returns successfully"
Mar 17 18:54:01.939821 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Mar 17 18:54:02.133072 kubelet[2514]: W0317 18:54:02.133028 2514 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1362699_a1fe_4d96_bb53_2ad2e65daacc.slice/cri-containerd-135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930.scope WatchSource:0}: task 135c97f6f63bfefe56ff1736970a673bf19c38ef19bbf31690222a5c832ff930 not found: not found
Mar 17 18:54:03.338119 systemd[1]: run-containerd-runc-k8s.io-e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a-runc.FfUS3a.mount: Deactivated successfully.
Mar 17 18:54:04.647684 systemd-networkd[1607]: lxc_health: Link UP
Mar 17 18:54:04.664394 systemd-networkd[1607]: lxc_health: Gained carrier
Mar 17 18:54:04.664820 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:54:05.240115 kubelet[2514]: W0317 18:54:05.240001 2514 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1362699_a1fe_4d96_bb53_2ad2e65daacc.slice/cri-containerd-d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c.scope WatchSource:0}: task d8b57578c86fb7b0139df3e5a315ac7b8c7eb65e2533243e0347891a1948d39c not found: not found
Mar 17 18:54:05.488050 systemd[1]: run-containerd-runc-k8s.io-e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a-runc.tcApaE.mount: Deactivated successfully.
Mar 17 18:54:05.702197 kubelet[2514]: I0317 18:54:05.702149 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sff2s" podStartSLOduration=8.702115735 podStartE2EDuration="8.702115735s" podCreationTimestamp="2025-03-17 18:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:54:02.356815084 +0000 UTC m=+220.634859949" watchObservedRunningTime="2025-03-17 18:54:05.702115735 +0000 UTC m=+223.980160560"
Mar 17 18:54:06.226713 systemd-networkd[1607]: lxc_health: Gained IPv6LL
Mar 17 18:54:07.677825 systemd[1]: run-containerd-runc-k8s.io-e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a-runc.MXPSEF.mount: Deactivated successfully.
Mar 17 18:54:08.346896 kubelet[2514]: W0317 18:54:08.346702 2514 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1362699_a1fe_4d96_bb53_2ad2e65daacc.slice/cri-containerd-2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20.scope WatchSource:0}: task 2e5228186a44d6a569eed4a4bf1a8e0b773f6ec36c2c121af74f53e4c2194b20 not found: not found
Mar 17 18:54:09.799812 systemd[1]: run-containerd-runc-k8s.io-e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a-runc.bDLIlS.mount: Deactivated successfully.
Mar 17 18:54:11.452746 kubelet[2514]: W0317 18:54:11.452707 2514 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1362699_a1fe_4d96_bb53_2ad2e65daacc.slice/cri-containerd-e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c.scope WatchSource:0}: task e4e5ddd7a1c1f37398882cbcc70d245b678e77cc156a789229b2184619ad6f5c not found: not found
Mar 17 18:54:11.924328 systemd[1]: run-containerd-runc-k8s.io-e136797b2b1c190da669c2880bdb2a0897a68f1d10b9e948ec8ad8b6e5a99b2a-runc.mFPLbg.mount: Deactivated successfully.
Mar 17 18:54:12.079847 sshd[4394]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:12.082406 systemd[1]: sshd@24-10.200.20.36:22-10.200.16.10:47626.service: Deactivated successfully.
Mar 17 18:54:12.083134 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:54:12.083698 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:54:12.084750 systemd-logind[1442]: Removed session 27.