Mar 17 18:48:14.039240 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 18:48:14.039264 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:48:14.039273 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 17 18:48:14.039280 kernel: printk: bootconsole [pl11] enabled
Mar 17 18:48:14.039287 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:48:14.039292 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98
Mar 17 18:48:14.039299 kernel: random: crng init done
Mar 17 18:48:14.039305 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:48:14.039310 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 17 18:48:14.039316 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039321 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039327 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 17 18:48:14.039333 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039339 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039346 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039352 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039358 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039365 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039371 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 17 18:48:14.039376 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 18:48:14.039382 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 17 18:48:14.039388 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:48:14.039394 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 18:48:14.039399 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Mar 17 18:48:14.039405 kernel: Zone ranges:
Mar 17 18:48:14.039411 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 17 18:48:14.039417 kernel: DMA32 empty
Mar 17 18:48:14.039423 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 18:48:14.039429 kernel: Movable zone start for each node
Mar 17 18:48:14.039435 kernel: Early memory node ranges
Mar 17 18:48:14.039441 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 17 18:48:14.039446 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 17 18:48:14.039452 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 17 18:48:14.039457 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 17 18:48:14.039463 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 17 18:48:14.039469 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 17 18:48:14.039475 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 18:48:14.039481 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 18:48:14.039486 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 17 18:48:14.039492 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:48:14.039502 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 18:48:14.039508 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:48:14.039514 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 17 18:48:14.039520 kernel: psci: SMC Calling Convention v1.4
Mar 17 18:48:14.039526 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Mar 17 18:48:14.039533 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Mar 17 18:48:14.039539 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:48:14.039545 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:48:14.039551 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 18:48:14.039557 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:48:14.039563 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:48:14.039569 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 18:48:14.039575 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:48:14.039581 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:48:14.039587 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:48:14.039593 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 18:48:14.039601 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 17 18:48:14.039607 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 18:48:14.039613 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 17 18:48:14.039619 kernel: Policy zone: Normal
Mar 17 18:48:14.039627 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:48:14.039633 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:48:14.039639 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:48:14.039646 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:48:14.039651 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:48:14.039657 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Mar 17 18:48:14.039664 kernel: Memory: 3986944K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207216K reserved, 0K cma-reserved)
Mar 17 18:48:14.039672 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:48:14.039678 kernel: trace event string verifier disabled
Mar 17 18:48:14.039684 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:48:14.039690 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:48:14.039697 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:48:14.039703 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:48:14.039709 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:48:14.039715 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:48:14.039721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:48:14.039728 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:48:14.039734 kernel: GICv3: 960 SPIs implemented
Mar 17 18:48:14.039742 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:48:14.039748 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:48:14.039754 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:48:14.039760 kernel: GICv3: 16 PPIs implemented
Mar 17 18:48:14.039767 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 17 18:48:14.039796 kernel: ITS: No ITS available, not enabling LPIs
Mar 17 18:48:14.039803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:48:14.039810 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 18:48:14.039816 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 18:48:14.039823 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 18:48:14.039829 kernel: Console: colour dummy device 80x25
Mar 17 18:48:14.039837 kernel: printk: console [tty1] enabled
Mar 17 18:48:14.039844 kernel: ACPI: Core revision 20210730
Mar 17 18:48:14.039851 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 18:48:14.039857 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:48:14.039863 kernel: LSM: Security Framework initializing
Mar 17 18:48:14.039869 kernel: SELinux: Initializing.
Mar 17 18:48:14.039876 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:48:14.039882 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:48:14.039888 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 17 18:48:14.039896 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 17 18:48:14.039902 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:48:14.039908 kernel: Remapping and enabling EFI services.
Mar 17 18:48:14.039915 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:48:14.039921 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:48:14.039928 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 17 18:48:14.039934 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:48:14.039940 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 18:48:14.039947 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:48:14.039953 kernel: SMP: Total of 2 processors activated.
Mar 17 18:48:14.039960 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:48:14.039967 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 17 18:48:14.039973 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 18:48:14.039980 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:48:14.039986 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 18:48:14.039992 kernel: CPU features: detected: LSE atomic instructions
Mar 17 18:48:14.039998 kernel: CPU features: detected: Privileged Access Never
Mar 17 18:48:14.040005 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:48:14.040011 kernel: alternatives: patching kernel code
Mar 17 18:48:14.040019 kernel: devtmpfs: initialized
Mar 17 18:48:14.040029 kernel: KASLR enabled
Mar 17 18:48:14.040036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:48:14.040044 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:48:14.040051 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:48:14.040057 kernel: SMBIOS 3.1.0 present.
Mar 17 18:48:14.040064 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 17 18:48:14.040070 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:48:14.040077 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:48:14.040085 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:48:14.040092 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:48:14.040099 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:48:14.040105 kernel: audit: type=2000 audit(0.091:1): state=initialized audit_enabled=0 res=1
Mar 17 18:48:14.040112 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:48:14.040119 kernel: cpuidle: using governor menu
Mar 17 18:48:14.040125 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:48:14.040133 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:48:14.040140 kernel: ACPI: bus type PCI registered
Mar 17 18:48:14.040146 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:48:14.040153 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:48:14.040159 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:48:14.040166 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:48:14.040173 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:48:14.040179 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:48:14.040186 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:48:14.040194 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:48:14.040200 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:48:14.040207 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:48:14.040214 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:48:14.040221 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:48:14.040227 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:48:14.040234 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:48:14.040240 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:48:14.040247 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:48:14.040255 kernel: ACPI: Interpreter enabled
Mar 17 18:48:14.040261 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:48:14.040268 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 18:48:14.040275 kernel: printk: console [ttyAMA0] enabled
Mar 17 18:48:14.040281 kernel: printk: bootconsole [pl11] disabled
Mar 17 18:48:14.040288 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 17 18:48:14.040294 kernel: iommu: Default domain type: Translated
Mar 17 18:48:14.040301 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:48:14.040308 kernel: vgaarb: loaded
Mar 17 18:48:14.040314 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:48:14.040322 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:48:14.040329 kernel: PTP clock support registered
Mar 17 18:48:14.040335 kernel: Registered efivars operations
Mar 17 18:48:14.040341 kernel: No ACPI PMU IRQ for CPU0
Mar 17 18:48:14.040348 kernel: No ACPI PMU IRQ for CPU1
Mar 17 18:48:14.040354 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:48:14.040361 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:48:14.040368 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:48:14.040375 kernel: pnp: PnP ACPI init
Mar 17 18:48:14.040382 kernel: pnp: PnP ACPI: found 0 devices
Mar 17 18:48:14.040388 kernel: NET: Registered PF_INET protocol family
Mar 17 18:48:14.040395 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:48:14.040402 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:48:14.040408 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:48:14.040415 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:48:14.040422 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:48:14.040428 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:48:14.040437 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:48:14.040443 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:48:14.040450 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:48:14.040457 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:48:14.040463 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 17 18:48:14.040470 kernel: kvm [1]: HYP mode not available
Mar 17 18:48:14.040476 kernel: Initialise system trusted keyrings
Mar 17 18:48:14.040483 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:48:14.040489 kernel: Key type asymmetric registered
Mar 17 18:48:14.040497 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:48:14.040503 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:48:14.040510 kernel: io scheduler mq-deadline registered
Mar 17 18:48:14.040517 kernel: io scheduler kyber registered
Mar 17 18:48:14.040523 kernel: io scheduler bfq registered
Mar 17 18:48:14.040530 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:48:14.040536 kernel: thunder_xcv, ver 1.0
Mar 17 18:48:14.040542 kernel: thunder_bgx, ver 1.0
Mar 17 18:48:14.040549 kernel: nicpf, ver 1.0
Mar 17 18:48:14.040555 kernel: nicvf, ver 1.0
Mar 17 18:48:14.040691 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:48:14.040752 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:48:13 UTC (1742237293)
Mar 17 18:48:14.040761 kernel: efifb: probing for efifb
Mar 17 18:48:14.040768 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 18:48:14.040790 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 18:48:14.040796 kernel: efifb: scrolling: redraw
Mar 17 18:48:14.040804 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 18:48:14.040813 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 18:48:14.040820 kernel: fb0: EFI VGA frame buffer device
Mar 17 18:48:14.040826 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 17 18:48:14.040833 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:48:14.040839 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:48:14.040846 kernel: Segment Routing with IPv6
Mar 17 18:48:14.040853 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:48:14.040859 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:48:14.040865 kernel: Key type dns_resolver registered
Mar 17 18:48:14.040872 kernel: registered taskstats version 1
Mar 17 18:48:14.040880 kernel: Loading compiled-in X.509 certificates
Mar 17 18:48:14.040887 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:48:14.040893 kernel: Key type .fscrypt registered
Mar 17 18:48:14.040900 kernel: Key type fscrypt-provisioning registered
Mar 17 18:48:14.040907 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:48:14.040914 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:48:14.040920 kernel: ima: No architecture policies found
Mar 17 18:48:14.040927 kernel: clk: Disabling unused clocks
Mar 17 18:48:14.040934 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:48:14.040941 kernel: Run /init as init process
Mar 17 18:48:14.040947 kernel: with arguments:
Mar 17 18:48:14.040954 kernel: /init
Mar 17 18:48:14.040960 kernel: with environment:
Mar 17 18:48:14.040966 kernel: HOME=/
Mar 17 18:48:14.040973 kernel: TERM=linux
Mar 17 18:48:14.040979 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:48:14.040988 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:48:14.040998 systemd[1]: Detected virtualization microsoft.
Mar 17 18:48:14.041005 systemd[1]: Detected architecture arm64.
Mar 17 18:48:14.041012 systemd[1]: Running in initrd.
Mar 17 18:48:14.041019 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:48:14.041026 systemd[1]: Hostname set to .
Mar 17 18:48:14.041033 systemd[1]: Initializing machine ID from random generator.
Mar 17 18:48:14.041040 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:48:14.041048 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:48:14.041055 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:48:14.041062 systemd[1]: Reached target paths.target.
Mar 17 18:48:14.041069 systemd[1]: Reached target slices.target.
Mar 17 18:48:14.041076 systemd[1]: Reached target swap.target.
Mar 17 18:48:14.041083 systemd[1]: Reached target timers.target.
Mar 17 18:48:14.041090 systemd[1]: Listening on iscsid.socket.
Mar 17 18:48:14.041097 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:48:14.041106 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:48:14.041113 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:48:14.041120 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:48:14.041127 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:48:14.041134 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:48:14.041141 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:48:14.041148 systemd[1]: Reached target sockets.target.
Mar 17 18:48:14.041155 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:48:14.041162 systemd[1]: Finished network-cleanup.service.
Mar 17 18:48:14.041170 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:48:14.041177 systemd[1]: Starting systemd-journald.service...
Mar 17 18:48:14.041184 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:48:14.041191 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:48:14.041204 systemd-journald[276]: Journal started
Mar 17 18:48:14.041245 systemd-journald[276]: Runtime Journal (/run/log/journal/0d8f6da313104f43a1c438c054a25ef7) is 8.0M, max 78.5M, 70.5M free.
Mar 17 18:48:14.031944 systemd-modules-load[277]: Inserted module 'overlay'
Mar 17 18:48:14.077166 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:48:14.077223 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:48:14.068151 systemd-resolved[278]: Positive Trust Anchors:
Mar 17 18:48:14.068159 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:48:14.106425 kernel: Bridge firewalling registered
Mar 17 18:48:14.106447 systemd[1]: Started systemd-journald.service.
Mar 17 18:48:14.068189 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:48:14.155394 kernel: SCSI subsystem initialized
Mar 17 18:48:14.155417 kernel: audit: type=1130 audit(1742237294.153:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.070356 systemd-resolved[278]: Defaulting to hostname 'linux'.
Mar 17 18:48:14.203654 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:48:14.203679 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:48:14.203688 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:48:14.101555 systemd-modules-load[277]: Inserted module 'br_netfilter'
Mar 17 18:48:14.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.154486 systemd[1]: Started systemd-resolved.service.
Mar 17 18:48:14.249435 kernel: audit: type=1130 audit(1742237294.204:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.249464 kernel: audit: type=1130 audit(1742237294.229:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.203878 systemd-modules-load[277]: Inserted module 'dm_multipath'
Mar 17 18:48:14.281579 kernel: audit: type=1130 audit(1742237294.255:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.205112 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:48:14.229882 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:48:14.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.256132 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:48:14.338923 kernel: audit: type=1130 audit(1742237294.281:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.338950 kernel: audit: type=1130 audit(1742237294.308:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.282046 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:48:14.309138 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:48:14.333941 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:48:14.344231 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:48:14.359799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:48:14.376003 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:48:14.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.392036 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:48:14.413417 kernel: audit: type=1130 audit(1742237294.390:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.418540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:48:14.439984 kernel: audit: type=1130 audit(1742237294.417:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.447697 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:48:14.475404 kernel: audit: type=1130 audit(1742237294.444:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.475478 dracut-cmdline[298]: dracut-dracut-053
Mar 17 18:48:14.475478 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Mar 17 18:48:14.475478 dracut-cmdline[298]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:48:14.556792 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:48:14.572798 kernel: iscsi: registered transport (tcp)
Mar 17 18:48:14.593690 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:48:14.593710 kernel: QLogic iSCSI HBA Driver
Mar 17 18:48:14.622863 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:48:14.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:14.628460 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:48:14.682791 kernel: raid6: neonx8 gen() 13826 MB/s
Mar 17 18:48:14.703787 kernel: raid6: neonx8 xor() 10833 MB/s
Mar 17 18:48:14.724781 kernel: raid6: neonx4 gen() 13551 MB/s
Mar 17 18:48:14.746780 kernel: raid6: neonx4 xor() 11311 MB/s
Mar 17 18:48:14.766782 kernel: raid6: neonx2 gen() 12963 MB/s
Mar 17 18:48:14.786781 kernel: raid6: neonx2 xor() 10235 MB/s
Mar 17 18:48:14.807782 kernel: raid6: neonx1 gen() 10558 MB/s
Mar 17 18:48:14.828780 kernel: raid6: neonx1 xor() 8816 MB/s
Mar 17 18:48:14.849780 kernel: raid6: int64x8 gen() 6275 MB/s
Mar 17 18:48:14.870783 kernel: raid6: int64x8 xor() 3536 MB/s
Mar 17 18:48:14.891786 kernel: raid6: int64x4 gen() 7236 MB/s
Mar 17 18:48:14.911804 kernel: raid6: int64x4 xor() 3855 MB/s
Mar 17 18:48:14.932780 kernel: raid6: int64x2 gen() 6149 MB/s
Mar 17 18:48:14.952780 kernel: raid6: int64x2 xor() 3320 MB/s
Mar 17 18:48:14.972782 kernel: raid6: int64x1 gen() 5049 MB/s
Mar 17 18:48:14.998345 kernel: raid6: int64x1 xor() 2651 MB/s
Mar 17 18:48:14.998355 kernel: raid6: using algorithm neonx8 gen() 13826 MB/s
Mar 17 18:48:14.998363 kernel: raid6: .... xor() 10833 MB/s, rmw enabled
Mar 17 18:48:15.003189 kernel: raid6: using neon recovery algorithm
Mar 17 18:48:15.023968 kernel: xor: measuring software checksum speed
Mar 17 18:48:15.023980 kernel: 8regs : 17249 MB/sec
Mar 17 18:48:15.027818 kernel: 32regs : 20665 MB/sec
Mar 17 18:48:15.031870 kernel: arm64_neon : 27757 MB/sec
Mar 17 18:48:15.037099 kernel: xor: using function: arm64_neon (27757 MB/sec)
Mar 17 18:48:15.093787 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:48:15.103065 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:48:15.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:15.110000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:48:15.111000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:48:15.112247 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:48:15.130568 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Mar 17 18:48:15.137872 systemd[1]: Started systemd-udevd.service.
Mar 17 18:48:15.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:15.148677 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:48:15.164351 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Mar 17 18:48:15.198016 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:48:15.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:15.208719 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:48:15.239009 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:48:15.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:15.284793 kernel: hv_vmbus: Vmbus version:5.3
Mar 17 18:48:15.303505 kernel: hv_vmbus: registering driver hid_hyperv
Mar 17 18:48:15.303554 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 17 18:48:15.304785 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 17 18:48:15.304820 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 17 18:48:15.304830 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 17 18:48:15.306788 kernel: hv_vmbus: registering driver hv_storvsc
Mar 17 18:48:15.343791 kernel: hv_vmbus: registering driver hv_netvsc
Mar 17 18:48:15.343846 kernel: scsi host1: storvsc_host_t
Mar 17 18:48:15.354679 kernel: scsi host0: storvsc_host_t
Mar 17 18:48:15.370068 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 17 18:48:15.370158 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 17 18:48:15.396538 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 17 18:48:15.407909 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:48:15.407924 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 17 18:48:15.429120 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 17 18:48:15.429232 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 17 18:48:15.429312 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 17 18:48:15.429389 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 17 18:48:15.429479 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 17 18:48:15.429555 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:48:15.429565 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 17 18:48:15.449797 kernel: hv_netvsc 002248bd-a39a-0022-48bd-a39a002248bd eth0: VF slot 1 added
Mar 17 18:48:15.459795 kernel: hv_vmbus: registering driver hv_pci
Mar 17 18:48:15.471557 kernel: hv_pci 46e9f0f6-f3e1-470f-90dd-e63935c1cdff: PCI VMBus probing: Using version 0x10004
Mar 17 18:48:15.572423 kernel: hv_pci 46e9f0f6-f3e1-470f-90dd-e63935c1cdff: PCI host bridge to bus f3e1:00
Mar 17 18:48:15.572524 kernel: pci_bus f3e1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 17 18:48:15.572613 kernel: pci_bus f3e1:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 17 18:48:15.572682 kernel: pci f3e1:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 17 18:48:15.572800 kernel: pci f3e1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 18:48:15.572885 kernel: pci f3e1:00:02.0: enabling Extended Tags
Mar 17 18:48:15.572962 kernel: pci f3e1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f3e1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 17 18:48:15.573038 kernel: pci_bus f3e1:00: busn_res: [bus 00-ff] end is updated to 00
Mar 17 18:48:15.573107 kernel: pci f3e1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 18:48:15.609791 kernel: mlx5_core f3e1:00:02.0: firmware version: 16.30.1284
Mar 17 18:48:15.828051 kernel: mlx5_core f3e1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Mar 17 18:48:15.828173 kernel: hv_netvsc 002248bd-a39a-0022-48bd-a39a002248bd eth0: VF registering: eth1
Mar 17 18:48:15.828255 kernel: mlx5_core f3e1:00:02.0 eth1: joined to eth0
Mar 17 18:48:15.836788 kernel: mlx5_core f3e1:00:02.0 enP62433s1: renamed from eth1
Mar 17 18:48:15.894021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:48:15.921793 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (531)
Mar 17 18:48:15.935260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:48:16.094194 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:48:16.103168 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:48:16.109950 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:48:16.121898 systemd[1]: Starting disk-uuid.service... Mar 17 18:48:16.147800 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:16.153789 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:17.162477 disk-uuid[603]: The operation has completed successfully. Mar 17 18:48:17.167932 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:48:17.226312 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:48:17.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.226407 systemd[1]: Finished disk-uuid.service. Mar 17 18:48:17.231839 systemd[1]: Starting verity-setup.service... Mar 17 18:48:17.279800 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:48:17.490486 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:48:17.501314 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:48:17.508882 systemd[1]: Finished verity-setup.service. Mar 17 18:48:17.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.564793 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:48:17.564844 systemd[1]: Mounted sysusr-usr.mount. 
Mar 17 18:48:17.568848 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:48:17.569597 systemd[1]: Starting ignition-setup.service... Mar 17 18:48:17.577583 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:48:17.615371 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:48:17.615419 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:17.621358 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:17.678537 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:48:17.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.687000 audit: BPF prog-id=9 op=LOAD Mar 17 18:48:17.688342 systemd[1]: Starting systemd-networkd.service... Mar 17 18:48:17.704079 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:48:17.717181 systemd-networkd[845]: lo: Link UP Mar 17 18:48:17.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.717188 systemd-networkd[845]: lo: Gained carrier Mar 17 18:48:17.717579 systemd-networkd[845]: Enumeration completed Mar 17 18:48:17.717660 systemd[1]: Started systemd-networkd.service. Mar 17 18:48:17.722672 systemd[1]: Reached target network.target. Mar 17 18:48:17.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.730498 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 18:48:17.775588 iscsid[853]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:48:17.775588 iscsid[853]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Mar 17 18:48:17.775588 iscsid[853]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:48:17.775588 iscsid[853]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:48:17.775588 iscsid[853]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:48:17.775588 iscsid[853]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:48:17.775588 iscsid[853]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:48:17.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.735340 systemd[1]: Starting iscsiuio.service... Mar 17 18:48:17.746568 systemd[1]: Started iscsiuio.service. Mar 17 18:48:17.761489 systemd[1]: Starting iscsid.service... Mar 17 18:48:17.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.776230 systemd[1]: Started iscsid.service. Mar 17 18:48:17.812936 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:48:17.838792 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:48:17.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:17.844111 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:48:17.855885 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:48:17.861403 systemd[1]: Reached target remote-fs.target. Mar 17 18:48:17.874856 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:48:17.891490 systemd[1]: Finished ignition-setup.service. Mar 17 18:48:17.905354 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:48:17.915209 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:48:17.969790 kernel: mlx5_core f3e1:00:02.0 enP62433s1: Link up Mar 17 18:48:18.009784 kernel: hv_netvsc 002248bd-a39a-0022-48bd-a39a002248bd eth0: Data path switched to VF: enP62433s1 Mar 17 18:48:18.009951 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:48:18.015107 systemd-networkd[845]: enP62433s1: Link UP Mar 17 18:48:18.015930 systemd-networkd[845]: eth0: Link UP Mar 17 18:48:18.016079 systemd-networkd[845]: eth0: Gained carrier Mar 17 18:48:18.029292 systemd-networkd[845]: enP62433s1: Gained carrier Mar 17 18:48:18.041845 systemd-networkd[845]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:48:19.647957 systemd-networkd[845]: eth0: Gained IPv6LL Mar 17 18:48:20.082066 ignition[868]: Ignition 2.14.0 Mar 17 18:48:20.082081 ignition[868]: Stage: fetch-offline Mar 17 18:48:20.082155 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:20.082177 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:20.150228 ignition[868]: no config dir 
at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:20.150394 ignition[868]: parsed url from cmdline: "" Mar 17 18:48:20.150398 ignition[868]: no config URL provided Mar 17 18:48:20.150403 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:48:20.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.163101 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:48:20.214829 kernel: kauditd_printk_skb: 18 callbacks suppressed Mar 17 18:48:20.214853 kernel: audit: type=1130 audit(1742237300.171:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.150411 ignition[868]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:48:20.172915 systemd[1]: Starting ignition-fetch.service... 
Mar 17 18:48:20.150416 ignition[868]: failed to fetch config: resource requires networking Mar 17 18:48:20.150896 ignition[868]: Ignition finished successfully Mar 17 18:48:20.206583 ignition[875]: Ignition 2.14.0 Mar 17 18:48:20.206589 ignition[875]: Stage: fetch Mar 17 18:48:20.206690 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:20.206710 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:20.213982 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:20.219677 ignition[875]: parsed url from cmdline: "" Mar 17 18:48:20.219685 ignition[875]: no config URL provided Mar 17 18:48:20.219708 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:48:20.219721 ignition[875]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:48:20.219754 ignition[875]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 17 18:48:20.297165 ignition[875]: GET result: OK Mar 17 18:48:20.297260 ignition[875]: config has been read from IMDS userdata Mar 17 18:48:20.300396 unknown[875]: fetched base config from "system" Mar 17 18:48:20.297337 ignition[875]: parsing config with SHA512: 782a1236267965e2c6c47b2ed0ef5826adc70df7e4a4f6c08cbe60173fe7c2c8a87bfed907ce495094112c806c5dfbdc2689a2487c19de4cba3fa59c4290fb0a Mar 17 18:48:20.300403 unknown[875]: fetched base config from "system" Mar 17 18:48:20.337842 kernel: audit: type=1130 audit(1742237300.314:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:48:20.300985 ignition[875]: fetch: fetch complete Mar 17 18:48:20.300408 unknown[875]: fetched user config from "azure" Mar 17 18:48:20.300990 ignition[875]: fetch: fetch passed Mar 17 18:48:20.307046 systemd[1]: Finished ignition-fetch.service. Mar 17 18:48:20.301029 ignition[875]: Ignition finished successfully Mar 17 18:48:20.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.315684 systemd[1]: Starting ignition-kargs.service... Mar 17 18:48:20.345105 ignition[881]: Ignition 2.14.0 Mar 17 18:48:20.386917 kernel: audit: type=1130 audit(1742237300.358:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.354605 systemd[1]: Finished ignition-kargs.service. Mar 17 18:48:20.345112 ignition[881]: Stage: kargs Mar 17 18:48:20.382875 systemd[1]: Starting ignition-disks.service... Mar 17 18:48:20.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.345227 ignition[881]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:20.439784 kernel: audit: type=1130 audit(1742237300.407:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.403839 systemd[1]: Finished ignition-disks.service. 
Mar 17 18:48:20.345248 ignition[881]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:20.426744 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:48:20.348503 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:20.433621 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:48:20.351520 ignition[881]: kargs: kargs passed Mar 17 18:48:20.444563 systemd[1]: Reached target local-fs.target. Mar 17 18:48:20.351585 ignition[881]: Ignition finished successfully Mar 17 18:48:20.452951 systemd[1]: Reached target sysinit.target. Mar 17 18:48:20.394446 ignition[887]: Ignition 2.14.0 Mar 17 18:48:20.461736 systemd[1]: Reached target basic.target. Mar 17 18:48:20.394452 ignition[887]: Stage: disks Mar 17 18:48:20.472658 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:48:20.394577 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:20.394598 ignition[887]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:20.398383 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:20.400474 ignition[887]: disks: disks passed Mar 17 18:48:20.400551 ignition[887]: Ignition finished successfully Mar 17 18:48:20.577398 systemd-fsck[895]: ROOT: clean, 623/7326000 files, 481077/7359488 blocks Mar 17 18:48:20.587990 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:48:20.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.593674 systemd[1]: Mounting sysroot.mount... 
Mar 17 18:48:20.618898 kernel: audit: type=1130 audit(1742237300.592:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:20.634813 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:48:20.635192 systemd[1]: Mounted sysroot.mount. Mar 17 18:48:20.639147 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:48:20.675059 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:48:20.679720 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:48:20.687734 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:48:20.687767 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:48:20.694164 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:48:20.773579 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:48:20.779180 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:48:20.802799 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (905) Mar 17 18:48:20.802843 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:48:20.809679 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:48:20.828341 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:20.828363 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:20.837686 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:48:20.848965 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:48:20.872173 initrd-setup-root[944]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:48:20.882198 initrd-setup-root[952]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:48:21.303877 systemd[1]: Finished initrd-setup-root.service. 
Mar 17 18:48:21.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.309454 systemd[1]: Starting ignition-mount.service... Mar 17 18:48:21.341641 kernel: audit: type=1130 audit(1742237301.308:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.332887 systemd[1]: Starting sysroot-boot.service... Mar 17 18:48:21.339906 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:48:21.340066 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:48:21.378122 systemd[1]: Finished sysroot-boot.service. Mar 17 18:48:21.426157 kernel: audit: type=1130 audit(1742237301.387:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.426181 kernel: audit: type=1130 audit(1742237301.410:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:21.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:21.426256 ignition[975]: INFO : Ignition 2.14.0 Mar 17 18:48:21.426256 ignition[975]: INFO : Stage: mount Mar 17 18:48:21.426256 ignition[975]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:21.426256 ignition[975]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:21.426256 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:21.426256 ignition[975]: INFO : mount: mount passed Mar 17 18:48:21.426256 ignition[975]: INFO : Ignition finished successfully Mar 17 18:48:21.388142 systemd[1]: Finished ignition-mount.service. Mar 17 18:48:21.996965 coreos-metadata[904]: Mar 17 18:48:21.996 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 18:48:22.008110 coreos-metadata[904]: Mar 17 18:48:22.008 INFO Fetch successful Mar 17 18:48:22.041580 coreos-metadata[904]: Mar 17 18:48:22.041 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 17 18:48:22.065808 coreos-metadata[904]: Mar 17 18:48:22.065 INFO Fetch successful Mar 17 18:48:22.080355 coreos-metadata[904]: Mar 17 18:48:22.080 INFO wrote hostname ci-3510.3.7-a-2597755324 to /sysroot/etc/hostname Mar 17 18:48:22.090073 systemd[1]: Finished flatcar-metadata-hostname.service. Mar 17 18:48:22.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:22.096437 systemd[1]: Starting ignition-files.service... Mar 17 18:48:22.125542 kernel: audit: type=1130 audit(1742237302.095:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:22.124121 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:48:22.144835 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (984) Mar 17 18:48:22.157159 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:48:22.157183 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:48:22.157193 kernel: BTRFS info (device sda6): has skinny extents Mar 17 18:48:22.166273 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:48:22.180709 ignition[1003]: INFO : Ignition 2.14.0 Mar 17 18:48:22.180709 ignition[1003]: INFO : Stage: files Mar 17 18:48:22.192037 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:22.192037 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:22.192037 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:22.192037 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:48:22.226352 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:48:22.226352 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:48:22.279512 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:48:22.287427 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:48:22.295719 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:48:22.295233 unknown[1003]: wrote ssh authorized keys file for user: core Mar 17 18:48:22.309467 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 
18:48:22.309467 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:48:22.309467 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:48:22.309467 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 18:48:22.421016 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:48:22.564861 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:48:22.576090 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:48:22.576090 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 17 18:48:23.062063 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 17 18:48:23.260386 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 
18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:48:23.270581 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3344549177" Mar 17 18:48:23.400995 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem3344549177": device or resource busy Mar 17 18:48:23.400995 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3344549177", trying btrfs: device or resource busy Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3344549177" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3344549177" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem3344549177" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem3344549177" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:48:23.400995 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2426055156" Mar 17 18:48:23.400995 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2426055156": device or resource busy Mar 17 18:48:23.303582 systemd[1]: mnt-oem3344549177.mount: Deactivated successfully. 
Mar 17 18:48:23.566869 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2426055156", trying btrfs: device or resource busy Mar 17 18:48:23.566869 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2426055156" Mar 17 18:48:23.566869 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2426055156" Mar 17 18:48:23.566869 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2426055156" Mar 17 18:48:23.566869 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2426055156" Mar 17 18:48:23.566869 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Mar 17 18:48:23.566869 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:48:23.566869 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 17 18:48:23.329519 systemd[1]: mnt-oem2426055156.mount: Deactivated successfully. 
Mar 17 18:48:23.735515 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET result: OK Mar 17 18:48:23.925855 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:48:23.925855 ignition[1003]: INFO : files: op(15): [started] processing unit "waagent.service" Mar 17 18:48:23.925855 ignition[1003]: INFO : files: op(15): [finished] processing unit "waagent.service" Mar 17 18:48:23.925855 ignition[1003]: INFO : files: op(16): [started] processing unit "nvidia.service" Mar 17 18:48:23.925855 ignition[1003]: INFO : files: op(16): [finished] processing unit "nvidia.service" Mar 17 18:48:23.925855 ignition[1003]: INFO : files: op(17): [started] processing unit "containerd.service" Mar 17 18:48:24.017403 kernel: audit: type=1130 audit(1742237303.945:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:23.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:23.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:23.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:23.941378 systemd[1]: Finished ignition-files.service. Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(17): [finished] processing unit "containerd.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(19): [started] processing unit "prepare-helm.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(19): [finished] processing unit "prepare-helm.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file 
"/sysroot/etc/.ignition-result.json" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:48:24.026712 ignition[1003]: INFO : files: files passed Mar 17 18:48:24.026712 ignition[1003]: INFO : Ignition finished successfully Mar 17 18:48:24.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:23.947011 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:48:24.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:23.973555 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:48:24.247402 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:48:23.974915 systemd[1]: Starting ignition-quench.service... Mar 17 18:48:23.993352 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:48:23.993473 systemd[1]: Finished ignition-quench.service. Mar 17 18:48:23.998447 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:48:24.010751 systemd[1]: Reached target ignition-complete.target. 
Mar 17 18:48:24.023096 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:48:24.057242 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:48:24.057367 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:48:24.064274 systemd[1]: Reached target initrd-fs.target. Mar 17 18:48:24.076190 systemd[1]: Reached target initrd.target. Mar 17 18:48:24.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.087218 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:48:24.088127 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:48:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.147187 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:48:24.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.160240 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:48:24.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.182492 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:48:24.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.194805 systemd[1]: Stopped target remote-cryptsetup.target. 
Mar 17 18:48:24.209420 systemd[1]: Stopped target timers.target. Mar 17 18:48:24.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.421472 iscsid[853]: iscsid shutting down. Mar 17 18:48:24.217980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:48:24.439380 ignition[1041]: INFO : Ignition 2.14.0 Mar 17 18:48:24.439380 ignition[1041]: INFO : Stage: umount Mar 17 18:48:24.439380 ignition[1041]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:48:24.439380 ignition[1041]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Mar 17 18:48:24.439380 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 17 18:48:24.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.218108 systemd[1]: Stopped dracut-pre-pivot.service. 
Mar 17 18:48:24.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.505741 ignition[1041]: INFO : umount: umount passed Mar 17 18:48:24.505741 ignition[1041]: INFO : Ignition finished successfully Mar 17 18:48:24.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.227630 systemd[1]: Stopped target initrd.target. Mar 17 18:48:24.236944 systemd[1]: Stopped target basic.target. Mar 17 18:48:24.251483 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:48:24.265131 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:48:24.274057 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:48:24.284082 systemd[1]: Stopped target remote-fs.target. Mar 17 18:48:24.292792 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:48:24.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.302361 systemd[1]: Stopped target sysinit.target. Mar 17 18:48:24.310903 systemd[1]: Stopped target local-fs.target. 
Mar 17 18:48:24.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.319280 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:48:24.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.329169 systemd[1]: Stopped target swap.target. Mar 17 18:48:24.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.337436 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:48:24.337596 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:48:24.346150 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:48:24.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.354392 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:48:24.354548 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:48:24.366094 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:48:24.366243 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:48:24.375518 systemd[1]: ignition-files.service: Deactivated successfully. 
Mar 17 18:48:24.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.375652 systemd[1]: Stopped ignition-files.service. Mar 17 18:48:24.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.687000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:48:24.383718 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:48:24.383865 systemd[1]: Stopped flatcar-metadata-hostname.service. Mar 17 18:48:24.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.393804 systemd[1]: Stopping ignition-mount.service... Mar 17 18:48:24.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.401087 systemd[1]: Stopping iscsid.service... Mar 17 18:48:24.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.406381 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:48:24.408987 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:48:24.420524 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:48:24.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:24.431300 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:48:24.434215 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:48:24.449434 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:48:24.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.449549 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:48:24.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.460824 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:48:24.821582 kernel: hv_netvsc 002248bd-a39a-0022-48bd-a39a002248bd eth0: Data path switched from VF: enP62433s1 Mar 17 18:48:24.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.460934 systemd[1]: Stopped iscsid.service. Mar 17 18:48:24.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.471188 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:48:24.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:24.471278 systemd[1]: Stopped ignition-mount.service. Mar 17 18:48:24.490808 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:48:24.491365 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:48:24.491514 systemd[1]: Stopped ignition-disks.service. Mar 17 18:48:24.501171 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:48:24.501261 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:48:24.510336 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:48:24.510424 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:48:24.515034 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:48:24.515120 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:48:24.524614 systemd[1]: Stopped target paths.target. Mar 17 18:48:24.533791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:48:24.541391 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:48:24.546781 systemd[1]: Stopped target slices.target. Mar 17 18:48:24.555881 systemd[1]: Stopped target sockets.target. Mar 17 18:48:24.564788 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:48:24.564882 systemd[1]: Closed iscsid.socket. Mar 17 18:48:24.573096 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:48:24.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:24.573187 systemd[1]: Stopped ignition-setup.service. Mar 17 18:48:24.581678 systemd[1]: Stopping iscsiuio.service... Mar 17 18:48:24.591491 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:48:24.591585 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:48:24.599019 systemd[1]: iscsiuio.service: Deactivated successfully. 
Mar 17 18:48:24.599105 systemd[1]: Stopped iscsiuio.service. Mar 17 18:48:24.607437 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:48:24.607514 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:48:24.615279 systemd[1]: Stopped target network.target. Mar 17 18:48:24.622078 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:48:24.622112 systemd[1]: Closed iscsiuio.socket. Mar 17 18:48:24.989000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:48:24.989000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:48:24.989000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:48:24.990000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:48:24.990000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:48:24.631110 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:48:24.631157 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:48:24.640210 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:48:24.648359 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:48:24.661889 systemd-networkd[845]: eth0: DHCPv6 lease lost Mar 17 18:48:25.022683 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Mar 17 18:48:25.017000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:48:24.669397 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:48:24.669505 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:48:24.678464 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:48:24.678566 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:48:24.688040 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:48:24.688087 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:48:24.697045 systemd[1]: Stopping network-cleanup.service... Mar 17 18:48:24.706277 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:48:24.706344 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:48:24.711642 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 18:48:24.711694 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:48:24.724367 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:48:24.724416 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:48:24.729644 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:48:24.744614 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:48:24.749202 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:48:24.749518 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:48:24.757793 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:48:24.757835 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:48:24.765948 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:48:24.765990 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:48:24.775833 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:48:24.775898 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:48:24.784853 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:48:24.784901 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:48:24.794520 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:48:24.794560 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:48:24.812500 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:48:24.821411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:48:24.821496 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:48:24.826924 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:48:24.827022 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:48:24.920182 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:48:24.920296 systemd[1]: Stopped network-cleanup.service. 
Mar 17 18:48:24.927918 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:48:24.938638 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:48:24.987673 systemd[1]: Switching root. Mar 17 18:48:25.023734 systemd-journald[276]: Journal stopped Mar 17 18:48:35.968633 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:48:35.968654 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:48:35.968664 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:48:35.968674 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:48:35.968681 kernel: SELinux: policy capability open_perms=1 Mar 17 18:48:35.968689 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:48:35.968699 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:48:35.968707 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:48:35.968715 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:48:35.968723 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:48:35.968731 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:48:35.968740 kernel: kauditd_printk_skb: 48 callbacks suppressed Mar 17 18:48:35.968749 kernel: audit: type=1403 audit(1742237307.720:87): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:48:35.968759 systemd[1]: Successfully loaded SELinux policy in 271.866ms. Mar 17 18:48:35.968780 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.850ms. Mar 17 18:48:35.968795 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:48:35.968804 systemd[1]: Detected virtualization microsoft. 
Mar 17 18:48:35.968813 systemd[1]: Detected architecture arm64. Mar 17 18:48:35.968821 systemd[1]: Detected first boot. Mar 17 18:48:35.968831 systemd[1]: Hostname set to . Mar 17 18:48:35.968839 systemd[1]: Initializing machine ID from random generator. Mar 17 18:48:35.968848 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:48:35.968859 kernel: audit: type=1400 audit(1742237309.514:88): avc: denied { associate } for pid=1091 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:48:35.968869 kernel: audit: type=1300 audit(1742237309.514:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000225f2 a1=4000028810 a2=40000266c0 a3=32 items=0 ppid=1074 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:35.968878 kernel: audit: type=1327 audit(1742237309.514:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:48:35.968887 kernel: audit: type=1400 audit(1742237309.523:89): avc: denied { associate } for pid=1091 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:48:35.968897 kernel: audit: type=1300 audit(1742237309.523:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000226c9 a2=1ed a3=0 items=2 ppid=1074 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:35.968908 kernel: audit: type=1307 audit(1742237309.523:89): cwd="/" Mar 17 18:48:35.968917 kernel: audit: type=1302 audit(1742237309.523:89): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:35.968926 kernel: audit: type=1302 audit(1742237309.523:89): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:35.968935 kernel: audit: type=1327 audit(1742237309.523:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:48:35.968944 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:48:35.968954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:48:35.968963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:48:35.968975 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:48:35.968984 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:48:35.968993 systemd[1]: Unnecessary job was removed for dev-sda6.device. 
Mar 17 18:48:35.969002 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:48:35.969011 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:48:35.969020 systemd[1]: Created slice system-getty.slice. Mar 17 18:48:35.969031 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:48:35.969042 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:48:35.969051 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:48:35.969061 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:48:35.969070 systemd[1]: Created slice user.slice. Mar 17 18:48:35.969079 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:48:35.969088 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:48:35.969097 systemd[1]: Set up automount boot.automount. Mar 17 18:48:35.969107 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:48:35.969116 systemd[1]: Reached target integritysetup.target. Mar 17 18:48:35.969127 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:48:35.969136 systemd[1]: Reached target remote-fs.target. Mar 17 18:48:35.969146 systemd[1]: Reached target slices.target. Mar 17 18:48:35.969155 systemd[1]: Reached target swap.target. Mar 17 18:48:35.969164 systemd[1]: Reached target torcx.target. Mar 17 18:48:35.969173 systemd[1]: Reached target veritysetup.target. Mar 17 18:48:35.969182 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:48:35.969192 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:48:35.969202 systemd[1]: Listening on systemd-journald-audit.socket. 
Mar 17 18:48:35.969211 kernel: audit: type=1400 audit(1742237315.544:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:35.969221 kernel: audit: type=1335 audit(1742237315.544:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 18:48:35.969230 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:48:35.969239 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:48:35.969248 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:48:35.969257 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:48:35.969268 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:48:35.969277 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:48:35.969286 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:48:35.969296 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:48:35.969305 systemd[1]: Mounting media.mount... Mar 17 18:48:35.969315 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:48:35.969325 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:48:35.969334 systemd[1]: Mounting tmp.mount... Mar 17 18:48:35.969344 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:48:35.969353 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:48:35.969363 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:48:35.969372 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:48:35.969381 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:48:35.969390 systemd[1]: Starting modprobe@drm.service... Mar 17 18:48:35.969399 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:48:35.969410 systemd[1]: Starting modprobe@fuse.service... 
Mar 17 18:48:35.969419 systemd[1]: Starting modprobe@loop.service... Mar 17 18:48:35.969428 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:48:35.969438 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 18:48:35.969447 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Mar 17 18:48:35.969456 systemd[1]: Starting systemd-journald.service... Mar 17 18:48:35.969465 kernel: loop: module loaded Mar 17 18:48:35.969474 kernel: fuse: init (API version 7.34) Mar 17 18:48:35.969483 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:48:35.969493 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:48:35.969504 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:48:35.969513 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:48:35.969523 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:48:35.969532 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:48:35.969541 kernel: audit: type=1305 audit(1742237315.965:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:48:35.969555 systemd-journald[1219]: Journal started Mar 17 18:48:35.969596 systemd-journald[1219]: Runtime Journal (/run/log/journal/cc0eb18ccd694229bb985f758e6ba30a) is 8.0M, max 78.5M, 70.5M free. 
Mar 17 18:48:35.544000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:48:35.544000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 18:48:35.965000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:48:35.982934 systemd[1]: Mounted media.mount. Mar 17 18:48:35.983012 kernel: audit: type=1300 audit(1742237315.965:92): arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff7758760 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:35.965000 audit[1219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff7758760 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:35.965000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:48:36.020800 kernel: audit: type=1327 audit(1742237315.965:92): proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:48:36.032087 systemd[1]: Started systemd-journald.service. Mar 17 18:48:36.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.033394 systemd[1]: Mounted sys-kernel-debug.mount. 
Mar 17 18:48:36.051795 kernel: audit: type=1130 audit(1742237316.032:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.056023 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:48:36.060756 systemd[1]: Mounted tmp.mount. Mar 17 18:48:36.064712 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:48:36.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.069763 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:48:36.088806 kernel: audit: type=1130 audit(1742237316.069:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.093109 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:48:36.093289 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:48:36.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.114547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:48:36.114714 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:48:36.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:36.145163 kernel: audit: type=1130 audit(1742237316.092:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.145247 kernel: audit: type=1130 audit(1742237316.113:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.143563 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:48:36.143751 systemd[1]: Finished modprobe@drm.service. Mar 17 18:48:36.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.146809 kernel: audit: type=1131 audit(1742237316.113:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:36.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.171880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:48:36.172157 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:48:36.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.177591 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:48:36.177868 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:48:36.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.183125 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:48:36.183306 systemd[1]: Finished modprobe@loop.service. Mar 17 18:48:36.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:48:36.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.188519 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:48:36.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.194189 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:48:36.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.200022 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:48:36.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.205220 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:48:36.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.211039 systemd[1]: Reached target network-pre.target. Mar 17 18:48:36.217185 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:48:36.223028 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:48:36.227163 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:48:36.246065 systemd[1]: Starting systemd-hwdb-update.service... 
Mar 17 18:48:36.252209 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:48:36.256930 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:48:36.258330 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:48:36.263283 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:48:36.264752 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:48:36.270898 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:48:36.276899 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:48:36.284036 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:48:36.289223 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:48:36.295708 udevadm[1243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:48:36.312068 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:48:36.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.317405 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:48:36.354211 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:48:36.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.373357 systemd-journald[1219]: Time spent on flushing to /var/log/journal/cc0eb18ccd694229bb985f758e6ba30a is 15.980ms for 1037 entries. Mar 17 18:48:36.373357 systemd-journald[1219]: System Journal (/var/log/journal/cc0eb18ccd694229bb985f758e6ba30a) is 8.0M, max 2.6G, 2.6G free. 
Mar 17 18:48:36.471041 systemd-journald[1219]: Received client request to flush runtime journal. Mar 17 18:48:36.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.472136 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:48:36.802880 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:48:36.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:36.809061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:48:37.198839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:48:37.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.233597 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:48:37.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.240383 systemd[1]: Starting systemd-udevd.service... Mar 17 18:48:37.260833 systemd-udevd[1254]: Using default interface naming scheme 'v252'. Mar 17 18:48:37.531260 systemd[1]: Started systemd-udevd.service. Mar 17 18:48:37.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.553018 systemd[1]: Starting systemd-networkd.service... 
Mar 17 18:48:37.574260 systemd[1]: Found device dev-ttyAMA0.device. Mar 17 18:48:37.590602 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:48:37.642834 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:48:37.665057 systemd[1]: Started systemd-userdbd.service. Mar 17 18:48:37.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:37.668000 audit[1271]: AVC avc: denied { confidentiality } for pid=1271 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:48:37.699420 kernel: hv_vmbus: registering driver hv_balloon Mar 17 18:48:37.699546 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 17 18:48:37.699578 kernel: hv_vmbus: registering driver hyperv_fb Mar 17 18:48:37.699602 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 17 18:48:37.668000 audit[1271]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab00867b10 a1=aa2c a2=ffff7f7f24b0 a3=aaab007c1010 items=12 ppid=1254 pid=1271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:48:37.668000 audit: CWD cwd="/" Mar 17 18:48:37.668000 audit: PATH item=0 name=(null) inode=6419 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=1 name=(null) inode=10050 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=2 name=(null) inode=10050 dev=00:0a mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=3 name=(null) inode=10051 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=4 name=(null) inode=10050 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=5 name=(null) inode=10052 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=6 name=(null) inode=10050 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=7 name=(null) inode=10053 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=8 name=(null) inode=10050 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=9 name=(null) inode=10054 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=10 name=(null) inode=10050 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PATH item=11 name=(null) inode=10055 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:48:37.668000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:48:37.720955 kernel: hv_utils: Registering HyperV Utility Driver Mar 17 18:48:37.721037 kernel: hv_vmbus: registering driver hv_utils Mar 17 18:48:37.725756 kernel: hv_utils: Heartbeat IC version 3.0 Mar 17 18:48:37.725875 kernel: hv_utils: Shutdown IC version 3.2 Mar 17 18:48:37.730525 kernel: hv_utils: TimeSync IC version 4.0 Mar 17 18:48:38.019940 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 17 18:48:38.020043 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 17 18:48:38.038260 kernel: Console: switching to colour dummy device 80x25 Mar 17 18:48:38.040841 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 18:48:38.214061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:48:38.221306 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:48:38.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.227763 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:48:38.306123 systemd-networkd[1275]: lo: Link UP Mar 17 18:48:38.306136 systemd-networkd[1275]: lo: Gained carrier Mar 17 18:48:38.306552 systemd-networkd[1275]: Enumeration completed Mar 17 18:48:38.306694 systemd[1]: Started systemd-networkd.service. Mar 17 18:48:38.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.313296 systemd[1]: Starting systemd-networkd-wait-online.service... 
Mar 17 18:48:38.318580 systemd-networkd[1275]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:48:38.366840 kernel: mlx5_core f3e1:00:02.0 enP62433s1: Link up Mar 17 18:48:38.392855 kernel: hv_netvsc 002248bd-a39a-0022-48bd-a39a002248bd eth0: Data path switched to VF: enP62433s1 Mar 17 18:48:38.393689 systemd-networkd[1275]: enP62433s1: Link UP Mar 17 18:48:38.394048 systemd-networkd[1275]: eth0: Link UP Mar 17 18:48:38.394058 systemd-networkd[1275]: eth0: Gained carrier Mar 17 18:48:38.399328 systemd-networkd[1275]: enP62433s1: Gained carrier Mar 17 18:48:38.411941 systemd-networkd[1275]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 18:48:38.443390 lvm[1332]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:48:38.481055 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:48:38.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.486571 systemd[1]: Reached target cryptsetup.target. Mar 17 18:48:38.492493 systemd[1]: Starting lvm2-activation.service... Mar 17 18:48:38.497062 lvm[1335]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:48:38.515940 systemd[1]: Finished lvm2-activation.service. Mar 17 18:48:38.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:38.521425 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:48:38.526495 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:48:38.526527 systemd[1]: Reached target local-fs.target. 
Mar 17 18:48:38.531228 systemd[1]: Reached target machines.target. Mar 17 18:48:38.537192 systemd[1]: Starting ldconfig.service... Mar 17 18:48:38.557457 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:48:38.557524 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:48:38.558806 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:48:38.564267 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:48:38.571063 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:48:38.577267 systemd[1]: Starting systemd-sysext.service... Mar 17 18:48:38.593676 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1338 (bootctl) Mar 17 18:48:38.595128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:48:38.938062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:48:38.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.047064 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:48:39.052723 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:48:39.053012 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:48:39.211855 kernel: loop0: detected capacity change from 0 to 194096 Mar 17 18:48:39.373612 systemd-fsck[1346]: fsck.fat 4.2 (2021-01-31) Mar 17 18:48:39.373612 systemd-fsck[1346]: /dev/sda1: 236 files, 117179/258078 clusters Mar 17 18:48:39.375154 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Mar 17 18:48:39.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.383280 systemd[1]: Mounting boot.mount... Mar 17 18:48:39.426534 systemd[1]: Mounted boot.mount. Mar 17 18:48:39.437213 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:48:39.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.503865 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:48:39.522845 kernel: loop1: detected capacity change from 0 to 194096 Mar 17 18:48:39.538692 (sd-sysext)[1361]: Using extensions 'kubernetes'. Mar 17 18:48:39.539862 (sd-sysext)[1361]: Merged extensions into '/usr'. Mar 17 18:48:39.557476 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:48:39.561684 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:48:39.563116 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:48:39.570077 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:48:39.576177 systemd[1]: Starting modprobe@loop.service... Mar 17 18:48:39.580111 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:48:39.580265 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:48:39.583354 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:48:39.588335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:48:39.588506 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:48:39.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.593878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:48:39.594051 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:48:39.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.599751 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:48:39.600104 systemd[1]: Finished modprobe@loop.service. Mar 17 18:48:39.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.605320 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 17 18:48:39.605397 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:48:39.606597 systemd[1]: Finished systemd-sysext.service. Mar 17 18:48:39.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:48:39.613570 systemd[1]: Starting ensure-sysext.service... Mar 17 18:48:39.619569 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:48:39.629088 systemd[1]: Reloading. Mar 17 18:48:39.646483 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:48:39.663041 /usr/lib/systemd/system-generators/torcx-generator[1396]: time="2025-03-17T18:48:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:48:39.663404 /usr/lib/systemd/system-generators/torcx-generator[1396]: time="2025-03-17T18:48:39Z" level=info msg="torcx already run" Mar 17 18:48:39.685251 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:48:39.698685 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:48:39.775223 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:48:39.775243 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Mar 17 18:48:39.792971 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:48:39.864970 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:48:39.872042 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:48:39.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.887758 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:39.889282 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:39.894859 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:39.901281 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:39.905755 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:39.906055 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:39.907080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:39.907387 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:39.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.915255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:39.915547 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:39.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.921408 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:39.921727 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:39.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.929146 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:39.930811 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:39.936594 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:39.942540 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:39.947061 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:39.947339 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:39.948338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:39.948621 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:39.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.957102 systemd-networkd[1275]: eth0: Gained IPv6LL
Mar 17 18:48:39.958124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:39.958312 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:39.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.963713 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:39.963927 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:39.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.969285 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:48:39.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:39.978116 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:48:39.979646 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:48:39.985262 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:48:39.991985 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:48:39.998051 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:48:40.003841 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.003997 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:40.005142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:48:40.005323 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:48:40.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.010766 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:48:40.010947 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:48:40.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.016247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:48:40.016430 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:48:40.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.021803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:48:40.022070 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:48:40.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.028149 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:48:40.028221 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:48:40.029467 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:48:40.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.238693 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:48:40.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.246072 systemd[1]: Starting audit-rules.service...
Mar 17 18:48:40.251183 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:48:40.256781 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:48:40.264098 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:48:40.270362 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:48:40.275927 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:48:40.281226 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:48:40.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.286643 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:48:40.305000 audit[1497]: SYSTEM_BOOT pid=1497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.310661 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:48:40.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.370617 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:48:40.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.375477 systemd[1]: Reached target time-set.target.
Mar 17 18:48:40.462521 systemd-resolved[1494]: Positive Trust Anchors:
Mar 17 18:48:40.462535 systemd-resolved[1494]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:48:40.462563 systemd-resolved[1494]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:48:40.517606 systemd-resolved[1494]: Using system hostname 'ci-3510.3.7-a-2597755324'.
Mar 17 18:48:40.519949 systemd[1]: Started systemd-resolved.service.
Mar 17 18:48:40.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.525248 systemd[1]: Reached target network.target.
Mar 17 18:48:40.532048 systemd[1]: Reached target network-online.target.
Mar 17 18:48:40.537159 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:48:40.542207 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:48:40.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:48:40.595000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:48:40.595000 audit[1513]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda5135e0 a2=420 a3=0 items=0 ppid=1490 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:48:40.595000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:48:40.596477 augenrules[1513]: No rules
Mar 17 18:48:40.597844 systemd[1]: Finished audit-rules.service.
Mar 17 18:48:40.806981 systemd-timesyncd[1496]: Contacted time server 216.31.17.12:123 (0.flatcar.pool.ntp.org).
Mar 17 18:48:40.807442 systemd-timesyncd[1496]: Initial clock synchronization to Mon 2025-03-17 18:48:40.803230 UTC.
Mar 17 18:48:45.677190 ldconfig[1337]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:48:45.693512 systemd[1]: Finished ldconfig.service.
Mar 17 18:48:45.700626 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:48:45.729495 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:48:45.734791 systemd[1]: Reached target sysinit.target.
Mar 17 18:48:45.739221 systemd[1]: Started motdgen.path.
Mar 17 18:48:45.743047 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:48:45.749717 systemd[1]: Started logrotate.timer.
Mar 17 18:48:45.753883 systemd[1]: Started mdadm.timer.
Mar 17 18:48:45.757541 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:48:45.762471 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:48:45.762503 systemd[1]: Reached target paths.target.
Mar 17 18:48:45.766705 systemd[1]: Reached target timers.target.
Mar 17 18:48:45.771459 systemd[1]: Listening on dbus.socket.
Mar 17 18:48:45.776728 systemd[1]: Starting docker.socket...
Mar 17 18:48:45.781727 systemd[1]: Listening on sshd.socket.
Mar 17 18:48:45.785967 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:45.786400 systemd[1]: Listening on docker.socket.
Mar 17 18:48:45.790529 systemd[1]: Reached target sockets.target.
Mar 17 18:48:45.794933 systemd[1]: Reached target basic.target.
Mar 17 18:48:45.799230 systemd[1]: System is tainted: cgroupsv1
Mar 17 18:48:45.799281 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:48:45.799303 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:48:45.800486 systemd[1]: Starting containerd.service...
Mar 17 18:48:45.805544 systemd[1]: Starting dbus.service...
Mar 17 18:48:45.810141 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:48:45.815732 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:48:45.820133 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:48:45.821606 systemd[1]: Starting kubelet.service...
Mar 17 18:48:45.828201 systemd[1]: Starting motdgen.service...
Mar 17 18:48:45.833710 systemd[1]: Started nvidia.service.
Mar 17 18:48:45.839952 systemd[1]: Starting prepare-helm.service...
Mar 17 18:48:45.845170 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:48:45.850903 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:48:45.856668 systemd[1]: Starting systemd-logind.service...
Mar 17 18:48:45.861716 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:48:45.861804 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:48:45.863234 systemd[1]: Starting update-engine.service...
Mar 17 18:48:45.870293 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:48:45.879576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:48:45.879860 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:48:45.886419 jq[1528]: false
Mar 17 18:48:45.886711 jq[1548]: true
Mar 17 18:48:45.901736 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:48:45.902028 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:48:45.921231 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:48:45.921508 systemd[1]: Finished motdgen.service.
Mar 17 18:48:45.945837 jq[1555]: true
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found loop1
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda1
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda2
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda3
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found usr
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda4
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda6
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda7
Mar 17 18:48:45.959407 extend-filesystems[1529]: Found sda9
Mar 17 18:48:45.959407 extend-filesystems[1529]: Checking size of /dev/sda9
Mar 17 18:48:46.153227 extend-filesystems[1529]: Old size kept for /dev/sda9
Mar 17 18:48:46.153227 extend-filesystems[1529]: Found sr0
Mar 17 18:48:46.095224 dbus-daemon[1527]: [system] SELinux support is enabled
Mar 17 18:48:46.168689 tar[1552]: linux-arm64/helm
Mar 17 18:48:46.168866 bash[1586]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:48:46.168945 env[1558]: time="2025-03-17T18:48:46.071229753Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:48:45.998291 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:48:46.146694 dbus-daemon[1527]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 18:48:45.999524 systemd-logind[1542]: New seat seat0.
Mar 17 18:48:46.036561 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:48:46.036812 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:48:46.092930 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:48:46.114044 systemd[1]: Started dbus.service.
Mar 17 18:48:46.125038 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:48:46.125065 systemd[1]: Reached target system-config.target.
Mar 17 18:48:46.130667 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:48:46.130683 systemd[1]: Reached target user-config.target.
Mar 17 18:48:46.146568 systemd[1]: Started systemd-logind.service.
Mar 17 18:48:46.181765 env[1558]: time="2025-03-17T18:48:46.181711000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:48:46.181908 env[1558]: time="2025-03-17T18:48:46.181884812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.183499550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.183541024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.183847854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.183868571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.183885568Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.183895326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.183977993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.184253548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.184450436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:48:46.184750 env[1558]: time="2025-03-17T18:48:46.184469593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:48:46.185196 env[1558]: time="2025-03-17T18:48:46.184529823Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:48:46.185196 env[1558]: time="2025-03-17T18:48:46.184543541Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.204896841Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.204959071Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.204976988Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205022941Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205041018Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205055575Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205071413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205447072Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205465029Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205479227Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205491585Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205506942Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205663677Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:48:46.206921 env[1558]: time="2025-03-17T18:48:46.205734025Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206088448Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206117243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206130361Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206181913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206196071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206212788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206227105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206238904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206250582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206262180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206276937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206290015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206412075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206434352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207295 env[1558]: time="2025-03-17T18:48:46.206447350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207647 env[1558]: time="2025-03-17T18:48:46.206460468Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:48:46.207647 env[1558]: time="2025-03-17T18:48:46.206475665Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:48:46.207647 env[1558]: time="2025-03-17T18:48:46.206486263Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:48:46.207647 env[1558]: time="2025-03-17T18:48:46.206502901Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:48:46.207647 env[1558]: time="2025-03-17T18:48:46.206537615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:48:46.207761 env[1558]: time="2025-03-17T18:48:46.206736983Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:48:46.207761 env[1558]: time="2025-03-17T18:48:46.206792094Z" level=info msg="Connect containerd service"
Mar 17 18:48:46.224530 env[1558]: time="2025-03-17T18:48:46.213874306Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:48:46.224530 env[1558]: time="2025-03-17T18:48:46.219942842Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:48:46.224530 env[1558]: time="2025-03-17T18:48:46.220242153Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:48:46.224530 env[1558]: time="2025-03-17T18:48:46.220281787Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:48:46.224530 env[1558]: time="2025-03-17T18:48:46.220332778Z" level=info msg="containerd successfully booted in 0.153740s"
Mar 17 18:48:46.220453 systemd[1]: Started containerd.service.
Mar 17 18:48:46.235442 env[1558]: time="2025-03-17T18:48:46.235363381Z" level=info msg="Start subscribing containerd event"
Mar 17 18:48:46.235442 env[1558]: time="2025-03-17T18:48:46.235444568Z" level=info msg="Start recovering state"
Mar 17 18:48:46.235604 env[1558]: time="2025-03-17T18:48:46.235531434Z" level=info msg="Start event monitor"
Mar 17 18:48:46.235604 env[1558]: time="2025-03-17T18:48:46.235564349Z" level=info msg="Start snapshots syncer"
Mar 17 18:48:46.235604 env[1558]: time="2025-03-17T18:48:46.235577787Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:48:46.235604 env[1558]: time="2025-03-17T18:48:46.235586905Z" level=info msg="Start streaming server"
Mar 17 18:48:46.254032 systemd[1]: nvidia.service: Deactivated successfully.
Mar 17 18:48:46.547376 update_engine[1546]: I0317 18:48:46.535362  1546 main.cc:92] Flatcar Update Engine starting
Mar 17 18:48:46.636852 systemd[1]: Started update-engine.service.
Mar 17 18:48:46.643560 systemd[1]: Started locksmithd.service.
Mar 17 18:48:46.648274 update_engine[1546]: I0317 18:48:46.648236  1546 update_check_scheduler.cc:74] Next update check in 3m20s
Mar 17 18:48:46.841436 tar[1552]: linux-arm64/LICENSE
Mar 17 18:48:46.841639 tar[1552]: linux-arm64/README.md
Mar 17 18:48:46.848988 systemd[1]: Finished prepare-helm.service.
Mar 17 18:48:46.879892 systemd[1]: Started kubelet.service.
Mar 17 18:48:47.352262 kubelet[1647]: E0317 18:48:47.352206    1647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:48:47.354232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:48:47.354383 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:48:47.921742 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:48:48.097056 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:48:48.115563 systemd[1]: Finished sshd-keygen.service.
Mar 17 18:48:48.122193 systemd[1]: Starting issuegen.service...
Mar 17 18:48:48.127605 systemd[1]: Started waagent.service.
Mar 17 18:48:48.132776 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:48:48.133080 systemd[1]: Finished issuegen.service.
Mar 17 18:48:48.139174 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 18:48:48.159388 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 18:48:48.166093 systemd[1]: Started getty@tty1.service.
Mar 17 18:48:48.171880 systemd[1]: Started serial-getty@ttyAMA0.service.
Mar 17 18:48:48.176988 systemd[1]: Reached target getty.target.
Mar 17 18:48:48.181217 systemd[1]: Reached target multi-user.target.
Mar 17 18:48:48.187777 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 18:48:48.201973 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 18:48:48.202337 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 18:48:48.208781 systemd[1]: Startup finished in 14.408s (kernel) + 20.626s (userspace) = 35.034s.
Mar 17 18:48:48.852951 login[1677]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Mar 17 18:48:48.854041 login[1676]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 17 18:48:48.908671 systemd[1]: Created slice user-500.slice.
Mar 17 18:48:48.909909 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 18:48:48.913212 systemd-logind[1542]: New session 2 of user core.
Mar 17 18:48:48.937221 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 18:48:48.938594 systemd[1]: Starting user@500.service...
Mar 17 18:48:48.956731 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:49.158976 systemd[1683]: Queued start job for default target default.target.
Mar 17 18:48:49.159642 systemd[1683]: Reached target paths.target.
Mar 17 18:48:49.159670 systemd[1683]: Reached target sockets.target.
Mar 17 18:48:49.159681 systemd[1683]: Reached target timers.target.
Mar 17 18:48:49.159691 systemd[1683]: Reached target basic.target.
Mar 17 18:48:49.159740 systemd[1683]: Reached target default.target.
Mar 17 18:48:49.159761 systemd[1683]: Startup finished in 196ms.
Mar 17 18:48:49.159843 systemd[1]: Started user@500.service.
Mar 17 18:48:49.160869 systemd[1]: Started session-2.scope.
Mar 17 18:48:49.854428 login[1677]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 17 18:48:49.858132 systemd-logind[1542]: New session 1 of user core.
Mar 17 18:48:49.859015 systemd[1]: Started session-1.scope.
Mar 17 18:48:53.435538 waagent[1671]: 2025-03-17T18:48:53.435419Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Mar 17 18:48:53.467116 waagent[1671]: 2025-03-17T18:48:53.467022Z INFO Daemon Daemon OS: flatcar 3510.3.7
Mar 17 18:48:53.471918 waagent[1671]: 2025-03-17T18:48:53.471843Z INFO Daemon Daemon Python: 3.9.16
Mar 17 18:48:53.476689 waagent[1671]: 2025-03-17T18:48:53.476578Z INFO Daemon Daemon Run daemon
Mar 17 18:48:53.481102 waagent[1671]: 2025-03-17T18:48:53.481034Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7'
Mar 17 18:48:53.499229 waagent[1671]: 2025-03-17T18:48:53.499075Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Mar 17 18:48:53.515267 waagent[1671]: 2025-03-17T18:48:53.515118Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 17 18:48:53.526059 waagent[1671]: 2025-03-17T18:48:53.525966Z INFO Daemon Daemon cloud-init is enabled: False
Mar 17 18:48:53.531668 waagent[1671]: 2025-03-17T18:48:53.531583Z INFO Daemon Daemon Using waagent for provisioning
Mar 17 18:48:53.537926 waagent[1671]: 2025-03-17T18:48:53.537852Z INFO Daemon Daemon Activate resource disk
Mar 17 18:48:53.542873 waagent[1671]: 2025-03-17T18:48:53.542787Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Mar 17 18:48:53.557819 waagent[1671]: 2025-03-17T18:48:53.557735Z INFO Daemon Daemon Found device: None
Mar 17 18:48:53.562836 waagent[1671]: 2025-03-17T18:48:53.562754Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Mar 17 18:48:53.571516 waagent[1671]: 2025-03-17T18:48:53.571441Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Mar 17 18:48:53.583920 waagent[1671]: 2025-03-17T18:48:53.583848Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 17 18:48:53.590153 waagent[1671]: 2025-03-17T18:48:53.590079Z INFO Daemon Daemon Running default provisioning handler
Mar 17 18:48:53.604738 waagent[1671]: 2025-03-17T18:48:53.604569Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Mar 17 18:48:53.621634 waagent[1671]: 2025-03-17T18:48:53.621481Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Mar 17 18:48:53.632616 waagent[1671]: 2025-03-17T18:48:53.632530Z INFO Daemon Daemon cloud-init is enabled: False
Mar 17 18:48:53.638356 waagent[1671]: 2025-03-17T18:48:53.638280Z INFO Daemon Daemon Copying ovf-env.xml
Mar 17 18:48:53.746799 waagent[1671]: 2025-03-17T18:48:53.744580Z INFO Daemon Daemon Successfully mounted dvd
Mar 17 18:48:53.848364 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Mar 17 18:48:53.904352 waagent[1671]: 2025-03-17T18:48:53.904190Z INFO Daemon Daemon Detect protocol endpoint
Mar 17 18:48:53.909910 waagent[1671]: 2025-03-17T18:48:53.909802Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Mar 17 18:48:53.916479 waagent[1671]: 2025-03-17T18:48:53.916386Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Mar 17 18:48:53.923564 waagent[1671]: 2025-03-17T18:48:53.923476Z INFO Daemon Daemon Test for route to 168.63.129.16
Mar 17 18:48:53.929555 waagent[1671]: 2025-03-17T18:48:53.929471Z INFO Daemon Daemon Route to 168.63.129.16 exists
Mar 17 18:48:53.935119 waagent[1671]: 2025-03-17T18:48:53.935037Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Mar 17 18:48:54.064921 waagent[1671]: 2025-03-17T18:48:54.064780Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Mar 17 18:48:54.072364 waagent[1671]: 2025-03-17T18:48:54.072320Z INFO Daemon Daemon Wire protocol version:2012-11-30
Mar 17 18:48:54.078020 waagent[1671]: 2025-03-17T18:48:54.077958Z INFO Daemon Daemon Server preferred version:2015-04-05
Mar 17 18:48:54.584895 waagent[1671]: 2025-03-17T18:48:54.584714Z INFO Daemon Daemon Initializing goal state during protocol detection
Mar 17 18:48:54.603169 waagent[1671]: 2025-03-17T18:48:54.603076Z INFO Daemon Daemon Forcing an update of the goal state..
Mar 17 18:48:54.609777 waagent[1671]: 2025-03-17T18:48:54.609695Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Mar 17 18:48:54.706220 waagent[1671]: 2025-03-17T18:48:54.706023Z INFO Daemon Daemon Found private key matching thumbprint 4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07
Mar 17 18:48:54.716021 waagent[1671]: 2025-03-17T18:48:54.715912Z INFO Daemon Daemon Certificate with thumbprint 5A2EFE6206636ACFA4F3F71D3B774842D91B5181 has no matching private key.
Mar 17 18:48:54.727097 waagent[1671]: 2025-03-17T18:48:54.727012Z INFO Daemon Daemon Fetch goal state completed
Mar 17 18:48:54.779671 waagent[1671]: 2025-03-17T18:48:54.779613Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 260588ef-a4d0-4d6b-b1d5-61e72cc6653d New eTag: 6129956582777589936]
Mar 17 18:48:54.790584 waagent[1671]: 2025-03-17T18:48:54.790501Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Mar 17 18:48:54.809969 waagent[1671]: 2025-03-17T18:48:54.809902Z INFO Daemon Daemon Starting provisioning
Mar 17 18:48:54.815439 waagent[1671]: 2025-03-17T18:48:54.815354Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 17 18:48:54.820556 waagent[1671]: 2025-03-17T18:48:54.820485Z INFO Daemon Daemon Set hostname [ci-3510.3.7-a-2597755324]
Mar 17 18:48:54.855500 waagent[1671]: 2025-03-17T18:48:54.855356Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-a-2597755324]
Mar 17 18:48:54.862291 waagent[1671]: 2025-03-17T18:48:54.862199Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 17 18:48:54.869154 waagent[1671]: 2025-03-17T18:48:54.869078Z INFO Daemon Daemon Primary interface is [eth0]
Mar 17 18:48:54.885750 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Mar 17 18:48:54.885993 systemd[1]: Stopped systemd-networkd-wait-online.service.
Mar 17 18:48:54.886051 systemd[1]: Stopping systemd-networkd-wait-online.service...
Mar 17 18:48:54.886247 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:48:54.889898 systemd-networkd[1275]: eth0: DHCPv6 lease lost
Mar 17 18:48:54.891678 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:48:54.891946 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:48:54.893953 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:48:54.928254 systemd-networkd[1729]: enP62433s1: Link UP
Mar 17 18:48:54.928266 systemd-networkd[1729]: enP62433s1: Gained carrier
Mar 17 18:48:54.929129 systemd-networkd[1729]: eth0: Link UP
Mar 17 18:48:54.929138 systemd-networkd[1729]: eth0: Gained carrier
Mar 17 18:48:54.929439 systemd-networkd[1729]: lo: Link UP
Mar 17 18:48:54.929447 systemd-networkd[1729]: lo: Gained carrier
Mar 17 18:48:54.929672 systemd-networkd[1729]: eth0: Gained IPv6LL
Mar 17 18:48:54.930678 systemd-networkd[1729]: Enumeration completed
Mar 17 18:48:54.930813 systemd[1]: Started systemd-networkd.service.
Mar 17 18:48:54.932735 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:48:54.937976 waagent[1671]: 2025-03-17T18:48:54.933930Z INFO Daemon Daemon Create user account if not exists
Mar 17 18:48:54.940797 waagent[1671]: 2025-03-17T18:48:54.940462Z INFO Daemon Daemon User core already exists, skip useradd
Mar 17 18:48:54.941352 systemd-networkd[1729]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:48:54.947350 waagent[1671]: 2025-03-17T18:48:54.947249Z INFO Daemon Daemon Configure sudoer
Mar 17 18:48:54.952556 waagent[1671]: 2025-03-17T18:48:54.952477Z INFO Daemon Daemon Configure sshd
Mar 17 18:48:54.957124 waagent[1671]: 2025-03-17T18:48:54.957050Z INFO Daemon Daemon Deploy ssh public key.
Mar 17 18:48:54.969937 systemd-networkd[1729]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 18:48:54.981000 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:48:56.172138 waagent[1671]: 2025-03-17T18:48:56.172069Z INFO Daemon Daemon Provisioning complete
Mar 17 18:48:56.192748 waagent[1671]: 2025-03-17T18:48:56.192681Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 17 18:48:56.199336 waagent[1671]: 2025-03-17T18:48:56.199249Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 17 18:48:56.210789 waagent[1671]: 2025-03-17T18:48:56.210675Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Mar 17 18:48:56.528586 waagent[1739]: 2025-03-17T18:48:56.528422Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Mar 17 18:48:56.531187 waagent[1739]: 2025-03-17T18:48:56.531113Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:48:56.531337 waagent[1739]: 2025-03-17T18:48:56.531289Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:48:56.545243 waagent[1739]: 2025-03-17T18:48:56.545158Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Mar 17 18:48:56.545423 waagent[1739]: 2025-03-17T18:48:56.545375Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Mar 17 18:48:56.630298 waagent[1739]: 2025-03-17T18:48:56.630146Z INFO ExtHandler ExtHandler Found private key matching thumbprint 4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07
Mar 17 18:48:56.630534 waagent[1739]: 2025-03-17T18:48:56.630480Z INFO ExtHandler ExtHandler Certificate with thumbprint 5A2EFE6206636ACFA4F3F71D3B774842D91B5181 has no matching private key.
Mar 17 18:48:56.630784 waagent[1739]: 2025-03-17T18:48:56.630720Z INFO ExtHandler ExtHandler Fetch goal state completed
Mar 17 18:48:56.652972 waagent[1739]: 2025-03-17T18:48:56.652912Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: e9f20be0-445a-4422-ad7a-a96994a9249f New eTag: 6129956582777589936]
Mar 17 18:48:56.653575 waagent[1739]: 2025-03-17T18:48:56.653513Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Mar 17 18:48:56.742789 waagent[1739]: 2025-03-17T18:48:56.742611Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 17 18:48:56.755441 waagent[1739]: 2025-03-17T18:48:56.754708Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1739
Mar 17 18:48:56.762508 waagent[1739]: 2025-03-17T18:48:56.762408Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
Mar 17 18:48:56.764127 waagent[1739]: 2025-03-17T18:48:56.764056Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 17 18:48:56.867441 waagent[1739]: 2025-03-17T18:48:56.867330Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 17 18:48:56.868040 waagent[1739]: 2025-03-17T18:48:56.867982Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 17 18:48:56.876147 waagent[1739]: 2025-03-17T18:48:56.876091Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 17 18:48:56.876881 waagent[1739]: 2025-03-17T18:48:56.876806Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Mar 17 18:48:56.878228 waagent[1739]: 2025-03-17T18:48:56.878164Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Mar 17 18:48:56.879738 waagent[1739]: 2025-03-17T18:48:56.879667Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 17 18:48:56.880039 waagent[1739]: 2025-03-17T18:48:56.879969Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:48:56.880812 waagent[1739]: 2025-03-17T18:48:56.880733Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:48:56.881466 waagent[1739]: 2025-03-17T18:48:56.881399Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 17 18:48:56.881783 waagent[1739]: 2025-03-17T18:48:56.881725Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 17 18:48:56.881783 waagent[1739]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 17 18:48:56.881783 waagent[1739]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Mar 17 18:48:56.881783 waagent[1739]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 17 18:48:56.881783 waagent[1739]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:48:56.881783 waagent[1739]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:48:56.881783 waagent[1739]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 17 18:48:56.884292 waagent[1739]: 2025-03-17T18:48:56.884115Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 17 18:48:56.885164 waagent[1739]: 2025-03-17T18:48:56.885094Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:48:56.885362 waagent[1739]: 2025-03-17T18:48:56.885305Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:48:56.886002 waagent[1739]: 2025-03-17T18:48:56.885938Z INFO EnvHandler ExtHandler Configure routes
Mar 17 18:48:56.886162 waagent[1739]: 2025-03-17T18:48:56.886115Z INFO EnvHandler ExtHandler Gateway:None
Mar 17 18:48:56.886278 waagent[1739]: 2025-03-17T18:48:56.886236Z INFO EnvHandler ExtHandler Routes:None
Mar 17 18:48:56.887202 waagent[1739]: 2025-03-17T18:48:56.887139Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 17 18:48:56.887368 waagent[1739]: 2025-03-17T18:48:56.887295Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 17 18:48:56.888163 waagent[1739]: 2025-03-17T18:48:56.888069Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 17 18:48:56.888330 waagent[1739]: 2025-03-17T18:48:56.888263Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 17 18:48:56.888620 waagent[1739]: 2025-03-17T18:48:56.888555Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 17 18:48:56.902159 waagent[1739]: 2025-03-17T18:48:56.902070Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Mar 17 18:48:56.902985 waagent[1739]: 2025-03-17T18:48:56.902934Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Mar 17 18:48:56.904067 waagent[1739]: 2025-03-17T18:48:56.904012Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Mar 17 18:48:56.945343 waagent[1739]: 2025-03-17T18:48:56.945202Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1729'
Mar 17 18:48:56.992789 waagent[1739]: 2025-03-17T18:48:56.992719Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Mar 17 18:48:57.028755 waagent[1739]: 2025-03-17T18:48:57.028574Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 17 18:48:57.028755 waagent[1739]: Executing ['ip', '-a', '-o', 'link']:
Mar 17 18:48:57.028755 waagent[1739]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 17 18:48:57.028755 waagent[1739]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:a3:9a brd ff:ff:ff:ff:ff:ff
Mar 17 18:48:57.028755 waagent[1739]: 3: enP62433s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:a3:9a brd ff:ff:ff:ff:ff:ff\ altname enP62433p0s2
Mar 17 18:48:57.028755 waagent[1739]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 17 18:48:57.028755 waagent[1739]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 17 18:48:57.028755 waagent[1739]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 17 18:48:57.028755 waagent[1739]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 17 18:48:57.028755 waagent[1739]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Mar 17 18:48:57.028755 waagent[1739]: 2: eth0 inet6 fe80::222:48ff:febd:a39a/64 scope link \ valid_lft forever preferred_lft forever
Mar 17 18:48:57.274874 waagent[1739]: 2025-03-17T18:48:57.274744Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Mar 17 18:48:57.605257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:48:57.605447 systemd[1]: Stopped kubelet.service.
Mar 17 18:48:57.606971 systemd[1]: Starting kubelet.service...
Mar 17 18:48:57.695433 systemd[1]: Started kubelet.service.
Mar 17 18:48:57.764125 kubelet[1785]: E0317 18:48:57.764072    1785 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:48:57.766707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:48:57.766879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:48:58.215855 waagent[1671]: 2025-03-17T18:48:58.215484Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Mar 17 18:48:58.220701 waagent[1671]: 2025-03-17T18:48:58.220645Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Mar 17 18:48:59.499156 waagent[1796]: 2025-03-17T18:48:59.499049Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Mar 17 18:48:59.499871 waagent[1796]: 2025-03-17T18:48:59.499790Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7
Mar 17 18:48:59.500009 waagent[1796]: 2025-03-17T18:48:59.499964Z INFO ExtHandler ExtHandler Python: 3.9.16
Mar 17 18:48:59.500133 waagent[1796]: 2025-03-17T18:48:59.500089Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Mar 17 18:48:59.508460 waagent[1796]: 2025-03-17T18:48:59.508302Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 17 18:48:59.508977 waagent[1796]: 2025-03-17T18:48:59.508910Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:48:59.509137 waagent[1796]: 2025-03-17T18:48:59.509091Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:48:59.523338 waagent[1796]: 2025-03-17T18:48:59.523248Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 17 18:48:59.536257 waagent[1796]: 2025-03-17T18:48:59.536201Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
Mar 17 18:48:59.537360 waagent[1796]: 2025-03-17T18:48:59.537301Z INFO ExtHandler
Mar 17 18:48:59.537531 waagent[1796]: 2025-03-17T18:48:59.537468Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 756aec7e-4910-4100-a01c-e2e3c294d328 eTag: 6129956582777589936 source: Fabric]
Mar 17 18:48:59.538269 waagent[1796]: 2025-03-17T18:48:59.538212Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 17 18:48:59.539482 waagent[1796]: 2025-03-17T18:48:59.539420Z INFO ExtHandler
Mar 17 18:48:59.539615 waagent[1796]: 2025-03-17T18:48:59.539570Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 17 18:48:59.553333 waagent[1796]: 2025-03-17T18:48:59.553280Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 17 18:48:59.553863 waagent[1796]: 2025-03-17T18:48:59.553798Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Mar 17 18:48:59.575073 waagent[1796]: 2025-03-17T18:48:59.575008Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Mar 17 18:48:59.652417 waagent[1796]: 2025-03-17T18:48:59.652266Z INFO ExtHandler Downloaded certificate {'thumbprint': '5A2EFE6206636ACFA4F3F71D3B774842D91B5181', 'hasPrivateKey': False}
Mar 17 18:48:59.653568 waagent[1796]: 2025-03-17T18:48:59.653506Z INFO ExtHandler Downloaded certificate {'thumbprint': '4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07', 'hasPrivateKey': True}
Mar 17 18:48:59.654626 waagent[1796]: 2025-03-17T18:48:59.654567Z INFO ExtHandler Fetch goal state completed
Mar 17 18:48:59.676098 waagent[1796]: 2025-03-17T18:48:59.675972Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Mar 17 18:48:59.688800 waagent[1796]: 2025-03-17T18:48:59.688688Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1796
Mar 17 18:48:59.692166 waagent[1796]: 2025-03-17T18:48:59.692091Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
Mar 17 18:48:59.693294 waagent[1796]: 2025-03-17T18:48:59.693234Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Mar 17 18:48:59.693611 waagent[1796]: 2025-03-17T18:48:59.693557Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Mar 17 18:48:59.695741 waagent[1796]: 2025-03-17T18:48:59.695680Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 17 18:48:59.700786 waagent[1796]: 2025-03-17T18:48:59.700717Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 17 18:48:59.701218 waagent[1796]: 2025-03-17T18:48:59.701155Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 17 18:48:59.710063 waagent[1796]: 2025-03-17T18:48:59.710000Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 17 18:48:59.710992 waagent[1796]: 2025-03-17T18:48:59.710922Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Mar 17 18:48:59.719049 waagent[1796]: 2025-03-17T18:48:59.718899Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 17 18:48:59.720481 waagent[1796]: 2025-03-17T18:48:59.720398Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Mar 17 18:48:59.722440 waagent[1796]: 2025-03-17T18:48:59.722360Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 17 18:48:59.722869 waagent[1796]: 2025-03-17T18:48:59.722772Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 17 18:48:59.723394 waagent[1796]: 2025-03-17T18:48:59.723326Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 17 18:48:59.724126 waagent[1796]: 2025-03-17T18:48:59.724050Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 17 18:48:59.724454 waagent[1796]: 2025-03-17T18:48:59.724394Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 18:48:59.724454 waagent[1796]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 18:48:59.724454 waagent[1796]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 18:48:59.724454 waagent[1796]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 18:48:59.724454 waagent[1796]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:48:59.724454 waagent[1796]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:48:59.724454 waagent[1796]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 18:48:59.727062 waagent[1796]: 2025-03-17T18:48:59.726925Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 18:48:59.727758 waagent[1796]: 2025-03-17T18:48:59.727682Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 18:48:59.730205 waagent[1796]: 2025-03-17T18:48:59.728411Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 18:48:59.731318 waagent[1796]: 2025-03-17T18:48:59.731197Z INFO EnvHandler ExtHandler Configure routes Mar 17 18:48:59.731479 waagent[1796]: 2025-03-17T18:48:59.731426Z INFO EnvHandler ExtHandler Gateway:None Mar 17 18:48:59.731733 waagent[1796]: 2025-03-17T18:48:59.731664Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 18:48:59.731882 waagent[1796]: 2025-03-17T18:48:59.731774Z INFO EnvHandler ExtHandler Routes:None Mar 17 18:48:59.732390 waagent[1796]: 2025-03-17T18:48:59.732294Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 18:48:59.738394 waagent[1796]: 2025-03-17T18:48:59.738196Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 18:48:59.738949 waagent[1796]: 2025-03-17T18:48:59.738855Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Mar 17 18:48:59.741234 waagent[1796]: 2025-03-17T18:48:59.741170Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 18:48:59.743919 waagent[1796]: 2025-03-17T18:48:59.743843Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 18:48:59.743919 waagent[1796]: Executing ['ip', '-a', '-o', 'link']: Mar 17 18:48:59.743919 waagent[1796]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 18:48:59.743919 waagent[1796]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:a3:9a brd ff:ff:ff:ff:ff:ff Mar 17 18:48:59.743919 waagent[1796]: 3: enP62433s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:a3:9a brd ff:ff:ff:ff:ff:ff\ altname enP62433p0s2 Mar 17 18:48:59.743919 waagent[1796]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 18:48:59.743919 waagent[1796]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 18:48:59.743919 waagent[1796]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 18:48:59.743919 waagent[1796]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 18:48:59.743919 waagent[1796]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Mar 17 18:48:59.743919 waagent[1796]: 2: eth0 inet6 fe80::222:48ff:febd:a39a/64 scope link \ valid_lft forever preferred_lft forever Mar 17 18:48:59.762975 waagent[1796]: 2025-03-17T18:48:59.762841Z INFO ExtHandler ExtHandler Downloading agent manifest Mar 17 18:48:59.799707 waagent[1796]: 2025-03-17T18:48:59.799623Z INFO ExtHandler ExtHandler Mar 17 18:48:59.800423 waagent[1796]: 2025-03-17T18:48:59.800352Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState 
started [incarnation_1 channel: WireServer source: Fabric activity: 8449083f-7387-46b4-82f3-9508c6b3d743 correlation cf49a652-ac9b-48ec-afe7-f718342f228f created: 2025-03-17T18:47:32.237259Z] Mar 17 18:48:59.803707 waagent[1796]: 2025-03-17T18:48:59.803608Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 17 18:48:59.811080 waagent[1796]: 2025-03-17T18:48:59.810946Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 11 ms] Mar 17 18:48:59.837297 waagent[1796]: 2025-03-17T18:48:59.837216Z INFO ExtHandler ExtHandler Looking for existing remote access users. Mar 17 18:48:59.866132 waagent[1796]: 2025-03-17T18:48:59.865956Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F7A6605F-B921-46A7-8641-D22050D742B1;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Mar 17 18:48:59.912438 waagent[1796]: 2025-03-17T18:48:59.912297Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Mar 17 18:48:59.912438 waagent[1796]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:48:59.912438 waagent[1796]: pkts bytes target prot opt in out source destination Mar 17 18:48:59.912438 waagent[1796]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:48:59.912438 waagent[1796]: pkts bytes target prot opt in out source destination Mar 17 18:48:59.912438 waagent[1796]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Mar 17 18:48:59.912438 waagent[1796]: pkts bytes target prot opt in out source destination Mar 17 18:48:59.912438 waagent[1796]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:48:59.912438 waagent[1796]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:48:59.912438 waagent[1796]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:48:59.920591 waagent[1796]: 2025-03-17T18:48:59.920412Z INFO EnvHandler 
ExtHandler Current Firewall rules: Mar 17 18:48:59.920591 waagent[1796]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:48:59.920591 waagent[1796]: pkts bytes target prot opt in out source destination Mar 17 18:48:59.920591 waagent[1796]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 18:48:59.920591 waagent[1796]: pkts bytes target prot opt in out source destination Mar 17 18:48:59.920591 waagent[1796]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Mar 17 18:48:59.920591 waagent[1796]: pkts bytes target prot opt in out source destination Mar 17 18:48:59.920591 waagent[1796]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 18:48:59.920591 waagent[1796]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 18:48:59.920591 waagent[1796]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 18:48:59.921174 waagent[1796]: 2025-03-17T18:48:59.921122Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 18:49:07.986569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:49:07.986733 systemd[1]: Stopped kubelet.service. Mar 17 18:49:07.988218 systemd[1]: Starting kubelet.service... Mar 17 18:49:08.217416 systemd[1]: Started kubelet.service. Mar 17 18:49:08.267856 kubelet[1852]: E0317 18:49:08.267733 1852 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:08.269698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:08.269859 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:18.486559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 17 18:49:18.486756 systemd[1]: Stopped kubelet.service. Mar 17 18:49:18.488299 systemd[1]: Starting kubelet.service... Mar 17 18:49:18.682419 systemd[1]: Started kubelet.service. Mar 17 18:49:18.734603 kubelet[1867]: E0317 18:49:18.734537 1867 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:18.736538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:18.736696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:26.090068 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 17 18:49:28.986558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 18:49:28.986738 systemd[1]: Stopped kubelet.service. Mar 17 18:49:28.988281 systemd[1]: Starting kubelet.service... Mar 17 18:49:29.220623 systemd[1]: Started kubelet.service. Mar 17 18:49:29.261721 kubelet[1883]: E0317 18:49:29.261618 1883 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:29.263869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:29.264017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:32.212427 update_engine[1546]: I0317 18:49:32.212059 1546 update_attempter.cc:509] Updating boot flags... Mar 17 18:49:39.446189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 18:49:39.446952 systemd[1]: Created slice system-sshd.slice. 
Mar 17 18:49:39.447054 systemd[1]: Stopped kubelet.service. Mar 17 18:49:39.448363 systemd[1]: Starting kubelet.service... Mar 17 18:49:39.449483 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:46114.service. Mar 17 18:49:39.774199 systemd[1]: Started kubelet.service. Mar 17 18:49:39.825362 kubelet[1941]: E0317 18:49:39.825313 1941 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:39.828101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:39.828266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:40.332543 sshd[1933]: Accepted publickey for core from 10.200.16.10 port 46114 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:40.351306 sshd[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:40.355169 systemd-logind[1542]: New session 3 of user core. Mar 17 18:49:40.355586 systemd[1]: Started session-3.scope. Mar 17 18:49:40.722972 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:46122.service. Mar 17 18:49:41.207709 sshd[1952]: Accepted publickey for core from 10.200.16.10 port 46122 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:41.210319 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:41.214517 systemd-logind[1542]: New session 4 of user core. Mar 17 18:49:41.215034 systemd[1]: Started session-4.scope. Mar 17 18:49:41.557084 sshd[1952]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:41.560112 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:46122.service: Deactivated successfully. 
Mar 17 18:49:41.560908 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:49:41.561845 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:49:41.562786 systemd-logind[1542]: Removed session 4. Mar 17 18:49:41.625807 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:46130.service. Mar 17 18:49:42.061524 sshd[1959]: Accepted publickey for core from 10.200.16.10 port 46130 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:42.063184 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:42.067070 systemd-logind[1542]: New session 5 of user core. Mar 17 18:49:42.067488 systemd[1]: Started session-5.scope. Mar 17 18:49:42.373927 sshd[1959]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:42.377753 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:46130.service: Deactivated successfully. Mar 17 18:49:42.378921 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:49:42.379143 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:49:42.380039 systemd-logind[1542]: Removed session 5. Mar 17 18:49:42.451858 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:46136.service. Mar 17 18:49:42.924611 sshd[1966]: Accepted publickey for core from 10.200.16.10 port 46136 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:42.925923 sshd[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:42.929916 systemd-logind[1542]: New session 6 of user core. Mar 17 18:49:42.930356 systemd[1]: Started session-6.scope. Mar 17 18:49:43.284032 sshd[1966]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:43.287017 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:46136.service: Deactivated successfully. Mar 17 18:49:43.288382 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 17 18:49:43.289040 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:49:43.290019 systemd-logind[1542]: Removed session 6. Mar 17 18:49:43.364643 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:46150.service. Mar 17 18:49:43.848165 sshd[1973]: Accepted publickey for core from 10.200.16.10 port 46150 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:49:43.849441 sshd[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:43.853379 systemd-logind[1542]: New session 7 of user core. Mar 17 18:49:43.853908 systemd[1]: Started session-7.scope. Mar 17 18:49:44.354512 sudo[1977]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:49:44.354721 sudo[1977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:49:44.375040 systemd[1]: Starting docker.service... Mar 17 18:49:44.406005 env[1987]: time="2025-03-17T18:49:44.405954069Z" level=info msg="Starting up" Mar 17 18:49:44.407640 env[1987]: time="2025-03-17T18:49:44.407604262Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:49:44.407640 env[1987]: time="2025-03-17T18:49:44.407631782Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:49:44.407767 env[1987]: time="2025-03-17T18:49:44.407655902Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:49:44.407767 env[1987]: time="2025-03-17T18:49:44.407666742Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:49:44.409680 env[1987]: time="2025-03-17T18:49:44.409651894Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:49:44.409680 env[1987]: time="2025-03-17T18:49:44.409673374Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:49:44.409788 env[1987]: 
time="2025-03-17T18:49:44.409689174Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:49:44.409788 env[1987]: time="2025-03-17T18:49:44.409699814Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:49:44.417138 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2084588131-merged.mount: Deactivated successfully. Mar 17 18:49:44.580073 env[1987]: time="2025-03-17T18:49:44.580038400Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:49:44.580266 env[1987]: time="2025-03-17T18:49:44.580253200Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:49:44.580474 env[1987]: time="2025-03-17T18:49:44.580458559Z" level=info msg="Loading containers: start." Mar 17 18:49:44.815850 kernel: Initializing XFRM netlink socket Mar 17 18:49:44.834938 env[1987]: time="2025-03-17T18:49:44.834890942Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:49:44.995496 systemd-networkd[1729]: docker0: Link UP Mar 17 18:49:45.026122 env[1987]: time="2025-03-17T18:49:45.026084616Z" level=info msg="Loading containers: done." Mar 17 18:49:45.036332 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3682606907-merged.mount: Deactivated successfully. 
Mar 17 18:49:45.053435 env[1987]: time="2025-03-17T18:49:45.053388671Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:49:45.053602 env[1987]: time="2025-03-17T18:49:45.053577749Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:49:45.053693 env[1987]: time="2025-03-17T18:49:45.053671368Z" level=info msg="Daemon has completed initialization" Mar 17 18:49:45.093803 systemd[1]: Started docker.service. Mar 17 18:49:45.099712 env[1987]: time="2025-03-17T18:49:45.099645783Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:49:48.842345 env[1558]: time="2025-03-17T18:49:48.842305105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:49:49.869749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 18:49:49.869899 systemd[1]: Stopped kubelet.service. Mar 17 18:49:49.871429 systemd[1]: Starting kubelet.service... Mar 17 18:49:49.880968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654176958.mount: Deactivated successfully. Mar 17 18:49:49.963497 systemd[1]: Started kubelet.service. Mar 17 18:49:50.010637 kubelet[2116]: E0317 18:49:50.010582 2116 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:50.012521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:50.012675 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:49:52.192169 env[1558]: time="2025-03-17T18:49:52.192110822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.199101 env[1558]: time="2025-03-17T18:49:52.199056640Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.203447 env[1558]: time="2025-03-17T18:49:52.203404691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.207157 env[1558]: time="2025-03-17T18:49:52.207122536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:52.207782 env[1558]: time="2025-03-17T18:49:52.207749142Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 18:49:52.217924 env[1558]: time="2025-03-17T18:49:52.217880662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:49:54.656744 env[1558]: time="2025-03-17T18:49:54.656701512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.670086 env[1558]: time="2025-03-17T18:49:54.670039939Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.678974 env[1558]: time="2025-03-17T18:49:54.678938489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.686439 env[1558]: time="2025-03-17T18:49:54.686405405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:54.687269 env[1558]: time="2025-03-17T18:49:54.687238902Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 18:49:54.696528 env[1558]: time="2025-03-17T18:49:54.696489832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:49:56.439686 env[1558]: time="2025-03-17T18:49:56.439632450Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.451759 env[1558]: time="2025-03-17T18:49:56.451710003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.456949 env[1558]: time="2025-03-17T18:49:56.456910276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.464728 env[1558]: time="2025-03-17T18:49:56.464665493Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:56.465760 env[1558]: time="2025-03-17T18:49:56.465718602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 18:49:56.475567 env[1558]: time="2025-03-17T18:49:56.475525925Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:49:57.605568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287485590.mount: Deactivated successfully. Mar 17 18:49:58.408391 env[1558]: time="2025-03-17T18:49:58.408335714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.414519 env[1558]: time="2025-03-17T18:49:58.414481965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.419086 env[1558]: time="2025-03-17T18:49:58.419050580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.422197 env[1558]: time="2025-03-17T18:49:58.422166659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:49:58.422513 env[1558]: time="2025-03-17T18:49:58.422482530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference 
\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 18:49:58.431652 env[1558]: time="2025-03-17T18:49:58.431617120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:49:59.139553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346819221.mount: Deactivated successfully. Mar 17 18:50:00.236538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 18:50:00.236718 systemd[1]: Stopped kubelet.service. Mar 17 18:50:00.238257 systemd[1]: Starting kubelet.service... Mar 17 18:50:00.420432 systemd[1]: Started kubelet.service. Mar 17 18:50:00.468605 kubelet[2152]: E0317 18:50:00.468541 2152 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:00.470521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:00.470678 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:50:00.869195 env[1558]: time="2025-03-17T18:50:00.869146741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:00.878914 env[1558]: time="2025-03-17T18:50:00.878873398Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:00.888167 env[1558]: time="2025-03-17T18:50:00.888117325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:00.896195 env[1558]: time="2025-03-17T18:50:00.896148509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:00.897097 env[1558]: time="2025-03-17T18:50:00.897069694Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:50:00.907087 env[1558]: time="2025-03-17T18:50:00.907046314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:50:01.583064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3978442318.mount: Deactivated successfully. 
Mar 17 18:50:01.614184 env[1558]: time="2025-03-17T18:50:01.614144788Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.628910 env[1558]: time="2025-03-17T18:50:01.628871049Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.635802 env[1558]: time="2025-03-17T18:50:01.635752868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.642248 env[1558]: time="2025-03-17T18:50:01.642212387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:01.642671 env[1558]: time="2025-03-17T18:50:01.642643566Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 18:50:01.651430 env[1558]: time="2025-03-17T18:50:01.651390319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:50:02.337814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272709445.mount: Deactivated successfully. 
Mar 17 18:50:06.106026 env[1558]: time="2025-03-17T18:50:06.105969109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.112098 env[1558]: time="2025-03-17T18:50:06.112060746Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.118046 env[1558]: time="2025-03-17T18:50:06.118004482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.123243 env[1558]: time="2025-03-17T18:50:06.123209350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:06.124124 env[1558]: time="2025-03-17T18:50:06.124095599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 18:50:10.486535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 18:50:10.486719 systemd[1]: Stopped kubelet.service. Mar 17 18:50:10.488196 systemd[1]: Starting kubelet.service... Mar 17 18:50:10.749590 systemd[1]: Started kubelet.service. 
Mar 17 18:50:10.822135 kubelet[2234]: E0317 18:50:10.822084 2234 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:10.823609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:10.823755 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:11.142581 systemd[1]: Stopped kubelet.service. Mar 17 18:50:11.144771 systemd[1]: Starting kubelet.service... Mar 17 18:50:11.173146 systemd[1]: Reloading. Mar 17 18:50:11.255840 /usr/lib/systemd/system-generators/torcx-generator[2269]: time="2025-03-17T18:50:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:50:11.256178 /usr/lib/systemd/system-generators/torcx-generator[2269]: time="2025-03-17T18:50:11Z" level=info msg="torcx already run" Mar 17 18:50:11.342580 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:50:11.342744 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:50:11.361213 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:50:11.467909 systemd[1]: Started kubelet.service. Mar 17 18:50:11.471525 systemd[1]: Stopping kubelet.service... 
Mar 17 18:50:11.472279 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:50:11.472632 systemd[1]: Stopped kubelet.service. Mar 17 18:50:11.478550 systemd[1]: Starting kubelet.service... Mar 17 18:50:11.607813 systemd[1]: Started kubelet.service. Mar 17 18:50:11.650631 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:11.651040 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:50:11.651089 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:50:11.651212 kubelet[2353]: I0317 18:50:11.651175 2353 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:50:12.556505 kubelet[2353]: I0317 18:50:12.556470 2353 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:50:12.556676 kubelet[2353]: I0317 18:50:12.556666 2353 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:50:12.556997 kubelet[2353]: I0317 18:50:12.556982 2353 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:50:12.567939 kubelet[2353]: E0317 18:50:12.567908 2353 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.569757 kubelet[2353]: I0317 18:50:12.569735 2353 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:12.582016 kubelet[2353]: I0317 18:50:12.581987 2353 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:50:12.583327 kubelet[2353]: I0317 18:50:12.583286 2353 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:50:12.583507 kubelet[2353]: I0317 18:50:12.583331 2353 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-2597755324","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:50:12.583587 kubelet[2353]: I0317 18:50:12.583519 2353 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 18:50:12.583587 kubelet[2353]: I0317 18:50:12.583529 2353 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:50:12.583665 kubelet[2353]: I0317 18:50:12.583648 2353 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:12.584414 kubelet[2353]: I0317 18:50:12.584395 2353 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:50:12.584446 kubelet[2353]: I0317 18:50:12.584420 2353 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:50:12.584873 kubelet[2353]: W0317 18:50:12.584813 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2597755324&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.584903 kubelet[2353]: E0317 18:50:12.584885 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2597755324&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.584954 kubelet[2353]: I0317 18:50:12.584941 2353 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:50:12.584983 kubelet[2353]: I0317 18:50:12.584965 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:50:12.591440 kubelet[2353]: I0317 18:50:12.591413 2353 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:50:12.591713 kubelet[2353]: I0317 18:50:12.591700 2353 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:50:12.591813 kubelet[2353]: W0317 18:50:12.591802 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. 
Recreating. Mar 17 18:50:12.592469 kubelet[2353]: I0317 18:50:12.592448 2353 server.go:1264] "Started kubelet" Mar 17 18:50:12.593318 kubelet[2353]: W0317 18:50:12.593269 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.593451 kubelet[2353]: E0317 18:50:12.593438 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.596457 kubelet[2353]: I0317 18:50:12.596418 2353 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:50:12.597449 kubelet[2353]: I0317 18:50:12.597430 2353 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:50:12.598588 kubelet[2353]: I0317 18:50:12.598544 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:50:12.598929 kubelet[2353]: I0317 18:50:12.598915 2353 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:50:12.607928 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Mar 17 18:50:12.608044 kubelet[2353]: E0317 18:50:12.601123 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-a-2597755324.182dabb22bf79a25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-a-2597755324,UID:ci-3510.3.7-a-2597755324,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-a-2597755324,},FirstTimestamp:2025-03-17 18:50:12.592425509 +0000 UTC m=+0.977400728,LastTimestamp:2025-03-17 18:50:12.592425509 +0000 UTC m=+0.977400728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-a-2597755324,}" Mar 17 18:50:12.608044 kubelet[2353]: E0317 18:50:12.602559 2353 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:50:12.608332 kubelet[2353]: I0317 18:50:12.608319 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:50:12.610538 kubelet[2353]: I0317 18:50:12.610420 2353 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:50:12.610637 kubelet[2353]: I0317 18:50:12.610568 2353 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:50:12.610637 kubelet[2353]: I0317 18:50:12.610632 2353 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:50:12.611086 kubelet[2353]: W0317 18:50:12.611031 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.611086 kubelet[2353]: E0317 18:50:12.611083 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.612164 kubelet[2353]: E0317 18:50:12.612117 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2597755324?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Mar 17 18:50:12.613853 kubelet[2353]: I0317 18:50:12.613804 2353 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:50:12.613853 kubelet[2353]: I0317 18:50:12.613836 2353 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:50:12.613952 kubelet[2353]: I0317 18:50:12.613905 2353 factory.go:219] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:50:12.684147 kubelet[2353]: I0317 18:50:12.684120 2353 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:50:12.684147 kubelet[2353]: I0317 18:50:12.684140 2353 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:50:12.684512 kubelet[2353]: I0317 18:50:12.684171 2353 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:12.689024 kubelet[2353]: I0317 18:50:12.688998 2353 policy_none.go:49] "None policy: Start" Mar 17 18:50:12.689720 kubelet[2353]: I0317 18:50:12.689706 2353 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:50:12.689815 kubelet[2353]: I0317 18:50:12.689803 2353 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:50:12.696400 kubelet[2353]: I0317 18:50:12.696367 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:50:12.699021 kubelet[2353]: I0317 18:50:12.698999 2353 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:50:12.699364 kubelet[2353]: I0317 18:50:12.699326 2353 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:50:12.699618 kubelet[2353]: I0317 18:50:12.699607 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:50:12.699785 kubelet[2353]: I0317 18:50:12.699772 2353 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:50:12.699884 kubelet[2353]: I0317 18:50:12.699864 2353 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:50:12.699963 kubelet[2353]: I0317 18:50:12.699955 2353 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:50:12.700073 kubelet[2353]: E0317 18:50:12.700061 2353 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 18:50:12.704501 kubelet[2353]: W0317 18:50:12.704443 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.704621 kubelet[2353]: E0317 18:50:12.704525 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:12.704739 kubelet[2353]: E0317 18:50:12.704618 2353 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-a-2597755324\" not found" Mar 17 18:50:12.712353 kubelet[2353]: I0317 18:50:12.712330 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:12.712872 kubelet[2353]: E0317 18:50:12.712837 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:12.803145 kubelet[2353]: I0317 18:50:12.803111 2353 topology_manager.go:215] "Topology Admit Handler" podUID="1c554e0899926e1606075c2429898f64" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 
18:50:12.804793 kubelet[2353]: I0317 18:50:12.804749 2353 topology_manager.go:215] "Topology Admit Handler" podUID="22a24d2e211ba3d73aab186669c8ea6c" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.809543 kubelet[2353]: I0317 18:50:12.807878 2353 topology_manager.go:215] "Topology Admit Handler" podUID="c260ad68a77690ccc7f604dc21180535" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.810938 kubelet[2353]: I0317 18:50:12.810902 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.810938 kubelet[2353]: I0317 18:50:12.810939 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.811052 kubelet[2353]: I0317 18:50:12.810958 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.811052 kubelet[2353]: I0317 18:50:12.810975 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.811052 kubelet[2353]: I0317 18:50:12.810998 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22a24d2e211ba3d73aab186669c8ea6c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-2597755324\" (UID: \"22a24d2e211ba3d73aab186669c8ea6c\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.811052 kubelet[2353]: I0317 18:50:12.811016 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.813225 kubelet[2353]: E0317 18:50:12.813195 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2597755324?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Mar 17 18:50:12.911423 kubelet[2353]: I0317 18:50:12.911380 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c260ad68a77690ccc7f604dc21180535-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2597755324\" (UID: \"c260ad68a77690ccc7f604dc21180535\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.911610 kubelet[2353]: I0317 18:50:12.911597 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c260ad68a77690ccc7f604dc21180535-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-2597755324\" (UID: \"c260ad68a77690ccc7f604dc21180535\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.911917 kubelet[2353]: I0317 18:50:12.911765 2353 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c260ad68a77690ccc7f604dc21180535-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2597755324\" (UID: \"c260ad68a77690ccc7f604dc21180535\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:12.918087 kubelet[2353]: I0317 18:50:12.918030 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:12.918591 kubelet[2353]: E0317 18:50:12.918567 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:13.113633 env[1558]: time="2025-03-17T18:50:13.113193506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-2597755324,Uid:1c554e0899926e1606075c2429898f64,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:13.115341 env[1558]: time="2025-03-17T18:50:13.115309563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-2597755324,Uid:22a24d2e211ba3d73aab186669c8ea6c,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:13.118612 env[1558]: time="2025-03-17T18:50:13.118581539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-2597755324,Uid:c260ad68a77690ccc7f604dc21180535,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:13.214686 kubelet[2353]: E0317 18:50:13.214641 2353 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2597755324?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Mar 17 18:50:13.320842 kubelet[2353]: I0317 18:50:13.320672 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:13.321105 kubelet[2353]: E0317 18:50:13.321062 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:13.627769 kubelet[2353]: W0317 18:50:13.627706 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:13.627769 kubelet[2353]: E0317 18:50:13.627769 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:13.896196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987965203.mount: Deactivated successfully. 
Mar 17 18:50:13.959096 env[1558]: time="2025-03-17T18:50:13.959040927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:13.963696 env[1558]: time="2025-03-17T18:50:13.963661001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:13.984847 env[1558]: time="2025-03-17T18:50:13.984793859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:13.987980 env[1558]: time="2025-03-17T18:50:13.987950567Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:13.992288 env[1558]: time="2025-03-17T18:50:13.992260794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.000607 env[1558]: time="2025-03-17T18:50:14.000569800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.009284 env[1558]: time="2025-03-17T18:50:14.009232550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.015790 kubelet[2353]: E0317 18:50:14.015750 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-a-2597755324?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s" Mar 17 18:50:14.027920 env[1558]: time="2025-03-17T18:50:14.027881956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.031768 env[1558]: time="2025-03-17T18:50:14.031734121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.037755 env[1558]: time="2025-03-17T18:50:14.037705028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.051149 env[1558]: time="2025-03-17T18:50:14.051107653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.056065 env[1558]: time="2025-03-17T18:50:14.056026708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:50:14.077630 kubelet[2353]: W0317 18:50:14.077557 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:14.077630 kubelet[2353]: E0317 18:50:14.077628 2353 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:14.122631 kubelet[2353]: I0317 18:50:14.122594 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:14.122974 kubelet[2353]: E0317 18:50:14.122929 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:14.139543 kubelet[2353]: W0317 18:50:14.139437 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:14.139543 kubelet[2353]: E0317 18:50:14.139501 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:14.140444 env[1558]: time="2025-03-17T18:50:14.139909539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:14.140444 env[1558]: time="2025-03-17T18:50:14.139945655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:14.140444 env[1558]: time="2025-03-17T18:50:14.139955294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:14.140444 env[1558]: time="2025-03-17T18:50:14.140117318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b4610cb95464d7165e83bb5558a8a1b4a201078bda392dfb12e7d9de5a8b2bb pid=2393 runtime=io.containerd.runc.v2 Mar 17 18:50:14.161673 env[1558]: time="2025-03-17T18:50:14.160298646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:14.161673 env[1558]: time="2025-03-17T18:50:14.160336283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:14.161673 env[1558]: time="2025-03-17T18:50:14.160346762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:14.165261 kubelet[2353]: W0317 18:50:14.165180 2353 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2597755324&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:14.165261 kubelet[2353]: E0317 18:50:14.165239 2353 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-a-2597755324&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Mar 17 18:50:14.165679 env[1558]: time="2025-03-17T18:50:14.164119454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8c7937081fcbafd6d747aa3d90d0334634045e845a3d20b257433fcbaf0bb74 pid=2417 runtime=io.containerd.runc.v2 Mar 17 18:50:14.184088 env[1558]: 
time="2025-03-17T18:50:14.183892425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:14.184088 env[1558]: time="2025-03-17T18:50:14.183947339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:14.184088 env[1558]: time="2025-03-17T18:50:14.183957538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:14.185241 env[1558]: time="2025-03-17T18:50:14.185148056Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3aeb048a6746198b6b5d1b67bf746d1a1bc2e782e43680c39fbbc408902d8797 pid=2453 runtime=io.containerd.runc.v2 Mar 17 18:50:14.219451 env[1558]: time="2025-03-17T18:50:14.219410580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-a-2597755324,Uid:c260ad68a77690ccc7f604dc21180535,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b4610cb95464d7165e83bb5558a8a1b4a201078bda392dfb12e7d9de5a8b2bb\"" Mar 17 18:50:14.223942 env[1558]: time="2025-03-17T18:50:14.223890960Z" level=info msg="CreateContainer within sandbox \"5b4610cb95464d7165e83bb5558a8a1b4a201078bda392dfb12e7d9de5a8b2bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:50:14.239424 env[1558]: time="2025-03-17T18:50:14.239379571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-a-2597755324,Uid:22a24d2e211ba3d73aab186669c8ea6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8c7937081fcbafd6d747aa3d90d0334634045e845a3d20b257433fcbaf0bb74\"" Mar 17 18:50:14.242238 env[1558]: time="2025-03-17T18:50:14.242206440Z" level=info msg="CreateContainer within sandbox \"c8c7937081fcbafd6d747aa3d90d0334634045e845a3d20b257433fcbaf0bb74\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:50:14.254513 env[1558]: time="2025-03-17T18:50:14.254465662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-a-2597755324,Uid:1c554e0899926e1606075c2429898f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aeb048a6746198b6b5d1b67bf746d1a1bc2e782e43680c39fbbc408902d8797\"" Mar 17 18:50:14.257243 env[1558]: time="2025-03-17T18:50:14.257214620Z" level=info msg="CreateContainer within sandbox \"3aeb048a6746198b6b5d1b67bf746d1a1bc2e782e43680c39fbbc408902d8797\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:50:14.355216 env[1558]: time="2025-03-17T18:50:14.355154409Z" level=info msg="CreateContainer within sandbox \"c8c7937081fcbafd6d747aa3d90d0334634045e845a3d20b257433fcbaf0bb74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"087aaef1cdc704ee1a9fe95784838e8f7013d853ae9cd7cb1b61d9423ee83f26\"" Mar 17 18:50:14.355905 env[1558]: time="2025-03-17T18:50:14.355877975Z" level=info msg="StartContainer for \"087aaef1cdc704ee1a9fe95784838e8f7013d853ae9cd7cb1b61d9423ee83f26\"" Mar 17 18:50:14.360558 env[1558]: time="2025-03-17T18:50:14.360517618Z" level=info msg="CreateContainer within sandbox \"5b4610cb95464d7165e83bb5558a8a1b4a201078bda392dfb12e7d9de5a8b2bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b24e7ad4c9f736eab93d5252ac43873f7f636d364dcb0f164129f2d5a3e69cc\"" Mar 17 18:50:14.360949 env[1558]: time="2025-03-17T18:50:14.360928616Z" level=info msg="StartContainer for \"4b24e7ad4c9f736eab93d5252ac43873f7f636d364dcb0f164129f2d5a3e69cc\"" Mar 17 18:50:14.385864 env[1558]: time="2025-03-17T18:50:14.385171528Z" level=info msg="CreateContainer within sandbox \"3aeb048a6746198b6b5d1b67bf746d1a1bc2e782e43680c39fbbc408902d8797\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"02863351dab12c3789acb7bb779a92d4bcb75cccd6ae10b25e003501483eb278\"" Mar 17 18:50:14.389991 env[1558]: time="2025-03-17T18:50:14.389222552Z" level=info msg="StartContainer for \"02863351dab12c3789acb7bb779a92d4bcb75cccd6ae10b25e003501483eb278\"" Mar 17 18:50:14.438613 env[1558]: time="2025-03-17T18:50:14.438507134Z" level=info msg="StartContainer for \"087aaef1cdc704ee1a9fe95784838e8f7013d853ae9cd7cb1b61d9423ee83f26\" returns successfully" Mar 17 18:50:14.472348 env[1558]: time="2025-03-17T18:50:14.472290787Z" level=info msg="StartContainer for \"4b24e7ad4c9f736eab93d5252ac43873f7f636d364dcb0f164129f2d5a3e69cc\" returns successfully" Mar 17 18:50:14.517474 env[1558]: time="2025-03-17T18:50:14.517416116Z" level=info msg="StartContainer for \"02863351dab12c3789acb7bb779a92d4bcb75cccd6ae10b25e003501483eb278\" returns successfully" Mar 17 18:50:15.725034 kubelet[2353]: I0317 18:50:15.725006 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:16.387139 kubelet[2353]: E0317 18:50:16.387080 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-a-2597755324\" not found" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:16.462484 kubelet[2353]: I0317 18:50:16.462457 2353 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:16.598495 kubelet[2353]: I0317 18:50:16.598466 2353 apiserver.go:52] "Watching apiserver" Mar 17 18:50:16.610994 kubelet[2353]: I0317 18:50:16.610964 2353 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:50:16.743220 kubelet[2353]: E0317 18:50:16.743154 2353 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-a-2597755324\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:18.559105 systemd[1]: Reloading. 
Mar 17 18:50:18.614391 kubelet[2353]: W0317 18:50:18.613988 2353 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:18.654353 /usr/lib/systemd/system-generators/torcx-generator[2644]: time="2025-03-17T18:50:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:50:18.654388 /usr/lib/systemd/system-generators/torcx-generator[2644]: time="2025-03-17T18:50:18Z" level=info msg="torcx already run" Mar 17 18:50:18.741927 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:50:18.742086 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:50:18.760966 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:50:18.877183 kubelet[2353]: I0317 18:50:18.876653 2353 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:18.877069 systemd[1]: Stopping kubelet.service... Mar 17 18:50:18.895235 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:50:18.895541 systemd[1]: Stopped kubelet.service. Mar 17 18:50:18.898028 systemd[1]: Starting kubelet.service... Mar 17 18:50:19.042399 systemd[1]: Started kubelet.service. 
Mar 17 18:50:19.115462 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:19.115462 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:50:19.115462 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:19.115916 kubelet[2719]: I0317 18:50:19.115510 2719 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:50:19.120283 kubelet[2719]: I0317 18:50:19.120251 2719 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:50:19.120483 kubelet[2719]: I0317 18:50:19.120473 2719 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:50:19.120774 kubelet[2719]: I0317 18:50:19.120761 2719 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:50:19.122185 kubelet[2719]: I0317 18:50:19.122167 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:50:19.124788 kubelet[2719]: I0317 18:50:19.124754 2719 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:19.135288 kubelet[2719]: I0317 18:50:19.134700 2719 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:50:19.135288 kubelet[2719]: I0317 18:50:19.135195 2719 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:50:19.135478 kubelet[2719]: I0317 18:50:19.135220 2719 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-a-2597755324","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:50:19.135478 kubelet[2719]: I0317 18:50:19.135383 2719 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 18:50:19.135478 kubelet[2719]: I0317 18:50:19.135404 2719 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:50:19.135478 kubelet[2719]: I0317 18:50:19.135470 2719 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:19.135615 kubelet[2719]: I0317 18:50:19.135570 2719 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:50:19.135615 kubelet[2719]: I0317 18:50:19.135583 2719 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:50:19.137871 kubelet[2719]: I0317 18:50:19.135609 2719 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:50:19.138021 kubelet[2719]: I0317 18:50:19.138008 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:50:19.147902 kubelet[2719]: I0317 18:50:19.142687 2719 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:50:19.147902 kubelet[2719]: I0317 18:50:19.142893 2719 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:50:19.147902 kubelet[2719]: I0317 18:50:19.143336 2719 server.go:1264] "Started kubelet" Mar 17 18:50:19.147902 kubelet[2719]: I0317 18:50:19.145514 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:50:19.155584 kubelet[2719]: I0317 18:50:19.155347 2719 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:50:19.156460 kubelet[2719]: I0317 18:50:19.156340 2719 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:50:19.158577 kubelet[2719]: I0317 18:50:19.158233 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:50:19.167976 kubelet[2719]: I0317 18:50:19.167958 2719 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:50:19.168160 kubelet[2719]: I0317 18:50:19.168148 2719 desired_state_of_world_populator.go:149] "Desired state 
populator starts to run" Mar 17 18:50:19.176876 kubelet[2719]: I0317 18:50:19.176850 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:50:19.183364 kubelet[2719]: I0317 18:50:19.183345 2719 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:50:19.186298 kubelet[2719]: I0317 18:50:19.186283 2719 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:50:19.188250 kubelet[2719]: E0317 18:50:19.183894 2719 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:50:19.188385 kubelet[2719]: I0317 18:50:19.183514 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:50:19.189926 kubelet[2719]: I0317 18:50:19.189906 2719 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:50:19.192014 kubelet[2719]: I0317 18:50:19.191997 2719 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:50:19.192393 kubelet[2719]: I0317 18:50:19.192379 2719 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:50:19.193012 kubelet[2719]: E0317 18:50:19.192992 2719 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:50:19.245485 kubelet[2719]: I0317 18:50:19.245462 2719 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:50:19.245646 kubelet[2719]: I0317 18:50:19.245633 2719 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:50:19.245704 kubelet[2719]: I0317 18:50:19.245696 2719 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:19.245915 kubelet[2719]: I0317 18:50:19.245902 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:50:19.246003 kubelet[2719]: I0317 18:50:19.245980 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:50:19.246058 kubelet[2719]: I0317 18:50:19.246050 2719 policy_none.go:49] "None policy: Start" Mar 17 18:50:19.246651 kubelet[2719]: I0317 18:50:19.246637 2719 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:50:19.246747 kubelet[2719]: I0317 18:50:19.246738 2719 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:50:19.246943 kubelet[2719]: I0317 18:50:19.246931 2719 state_mem.go:75] "Updated machine memory state" Mar 17 18:50:19.248075 kubelet[2719]: I0317 18:50:19.248060 2719 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:50:19.293949 kubelet[2719]: E0317 18:50:19.293872 2719 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:50:19.366733 kubelet[2719]: I0317 18:50:19.366705 
2719 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:50:19.367290 kubelet[2719]: I0317 18:50:19.367262 2719 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:50:19.367382 kubelet[2719]: I0317 18:50:19.367257 2719 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:50:19.367891 kubelet[2719]: I0317 18:50:19.367642 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:50:19.476322 kubelet[2719]: I0317 18:50:19.473987 2719 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:19.487540 kubelet[2719]: I0317 18:50:19.487415 2719 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:19.487540 kubelet[2719]: I0317 18:50:19.487537 2719 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-a-2597755324" Mar 17 18:50:19.494575 kubelet[2719]: I0317 18:50:19.494545 2719 topology_manager.go:215] "Topology Admit Handler" podUID="22a24d2e211ba3d73aab186669c8ea6c" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.494778 kubelet[2719]: I0317 18:50:19.494763 2719 topology_manager.go:215] "Topology Admit Handler" podUID="c260ad68a77690ccc7f604dc21180535" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.494894 kubelet[2719]: I0317 18:50:19.494881 2719 topology_manager.go:215] "Topology Admit Handler" podUID="1c554e0899926e1606075c2429898f64" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.507429 kubelet[2719]: W0317 18:50:19.507370 2719 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:19.507570 kubelet[2719]: E0317 18:50:19.507470 2719 
kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-a-2597755324\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.507676 kubelet[2719]: W0317 18:50:19.507610 2719 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:19.507784 kubelet[2719]: W0317 18:50:19.507771 2719 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:19.609607 sudo[2752]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:50:19.609875 sudo[2752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:50:19.668345 kubelet[2719]: I0317 18:50:19.668312 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22a24d2e211ba3d73aab186669c8ea6c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-a-2597755324\" (UID: \"22a24d2e211ba3d73aab186669c8ea6c\") " pod="kube-system/kube-scheduler-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.668593 kubelet[2719]: I0317 18:50:19.668561 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c260ad68a77690ccc7f604dc21180535-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-a-2597755324\" (UID: \"c260ad68a77690ccc7f604dc21180535\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.668709 kubelet[2719]: I0317 18:50:19.668695 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.668808 kubelet[2719]: I0317 18:50:19.668796 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.668927 kubelet[2719]: I0317 18:50:19.668914 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.669036 kubelet[2719]: I0317 18:50:19.669023 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.669144 kubelet[2719]: I0317 18:50:19.669131 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c260ad68a77690ccc7f604dc21180535-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2597755324\" (UID: \"c260ad68a77690ccc7f604dc21180535\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.669249 kubelet[2719]: I0317 18:50:19.669237 2719 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c260ad68a77690ccc7f604dc21180535-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-a-2597755324\" (UID: \"c260ad68a77690ccc7f604dc21180535\") " pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:19.669358 kubelet[2719]: I0317 18:50:19.669346 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c554e0899926e1606075c2429898f64-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-a-2597755324\" (UID: \"1c554e0899926e1606075c2429898f64\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" Mar 17 18:50:20.085561 sudo[2752]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:20.138719 kubelet[2719]: I0317 18:50:20.138678 2719 apiserver.go:52] "Watching apiserver" Mar 17 18:50:20.168613 kubelet[2719]: I0317 18:50:20.168573 2719 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:50:20.253835 kubelet[2719]: W0317 18:50:20.253783 2719 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:20.253956 kubelet[2719]: E0317 18:50:20.253862 2719 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-a-2597755324\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-a-2597755324" Mar 17 18:50:20.258849 kubelet[2719]: W0317 18:50:20.258812 2719 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:50:20.258961 kubelet[2719]: E0317 18:50:20.258871 2719 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-a-2597755324\" already exists" 
pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" Mar 17 18:50:20.259462 kubelet[2719]: I0317 18:50:20.259416 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-a-2597755324" podStartSLOduration=2.259404936 podStartE2EDuration="2.259404936s" podCreationTimestamp="2025-03-17 18:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:20.258680921 +0000 UTC m=+1.204375505" watchObservedRunningTime="2025-03-17 18:50:20.259404936 +0000 UTC m=+1.205099520" Mar 17 18:50:20.281488 kubelet[2719]: I0317 18:50:20.281250 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-a-2597755324" podStartSLOduration=1.281234912 podStartE2EDuration="1.281234912s" podCreationTimestamp="2025-03-17 18:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:20.270752046 +0000 UTC m=+1.216446630" watchObservedRunningTime="2025-03-17 18:50:20.281234912 +0000 UTC m=+1.226929496" Mar 17 18:50:21.950347 sudo[1977]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:22.026258 sshd[1973]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:22.028875 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:50:22.029029 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:46150.service: Deactivated successfully. Mar 17 18:50:22.029885 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:50:22.030345 systemd-logind[1542]: Removed session 7. 
Mar 17 18:50:27.394878 kubelet[2719]: I0317 18:50:27.394808 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-a-2597755324" podStartSLOduration=8.39477802 podStartE2EDuration="8.39477802s" podCreationTimestamp="2025-03-17 18:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:20.281757545 +0000 UTC m=+1.227452129" watchObservedRunningTime="2025-03-17 18:50:27.39477802 +0000 UTC m=+8.340472604" Mar 17 18:50:34.659088 kubelet[2719]: I0317 18:50:34.659063 2719 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:50:34.660004 env[1558]: time="2025-03-17T18:50:34.659893131Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:50:34.660274 kubelet[2719]: I0317 18:50:34.660109 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:50:35.549797 kubelet[2719]: I0317 18:50:35.549750 2719 topology_manager.go:215] "Topology Admit Handler" podUID="fafe99a9-efe0-46ec-9563-915e323e6863" podNamespace="kube-system" podName="kube-proxy-z42hm" Mar 17 18:50:35.555881 kubelet[2719]: W0317 18:50:35.555846 2719 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:50:35.555881 kubelet[2719]: E0317 18:50:35.555884 2719 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "configmaps" in API group "" in the 
namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:50:35.556094 kubelet[2719]: W0317 18:50:35.555858 2719 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:50:35.556094 kubelet[2719]: E0317 18:50:35.555905 2719 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:50:35.564246 kubelet[2719]: I0317 18:50:35.564205 2719 topology_manager.go:215] "Topology Admit Handler" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" podNamespace="kube-system" podName="cilium-n5vxp" Mar 17 18:50:35.617085 kubelet[2719]: I0317 18:50:35.617046 2719 topology_manager.go:215] "Topology Admit Handler" podUID="5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51" podNamespace="kube-system" podName="cilium-operator-599987898-c9pxv" Mar 17 18:50:35.646334 kubelet[2719]: I0317 18:50:35.646284 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fafe99a9-efe0-46ec-9563-915e323e6863-lib-modules\") pod \"kube-proxy-z42hm\" (UID: \"fafe99a9-efe0-46ec-9563-915e323e6863\") " pod="kube-system/kube-proxy-z42hm" Mar 17 18:50:35.646484 kubelet[2719]: I0317 18:50:35.646364 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-lib-modules\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp" Mar 17 18:50:35.646484 kubelet[2719]: I0317 18:50:35.646411 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ada4742e-e57f-46ea-bfe3-7e743ca4b565-clustermesh-secrets\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp" Mar 17 18:50:35.646484 kubelet[2719]: I0317 18:50:35.646431 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-config-path\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp" Mar 17 18:50:35.646484 kubelet[2719]: I0317 18:50:35.646448 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzzv\" (UniqueName: \"kubernetes.io/projected/fafe99a9-efe0-46ec-9563-915e323e6863-kube-api-access-frzzv\") pod \"kube-proxy-z42hm\" (UID: \"fafe99a9-efe0-46ec-9563-915e323e6863\") " pod="kube-system/kube-proxy-z42hm" Mar 17 18:50:35.646594 kubelet[2719]: I0317 18:50:35.646512 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-bpf-maps\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp" Mar 17 18:50:35.646594 kubelet[2719]: I0317 18:50:35.646531 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fafe99a9-efe0-46ec-9563-915e323e6863-kube-proxy\") pod \"kube-proxy-z42hm\" 
(UID: \"fafe99a9-efe0-46ec-9563-915e323e6863\") " pod="kube-system/kube-proxy-z42hm" Mar 17 18:50:35.646594 kubelet[2719]: I0317 18:50:35.646573 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-cilium-config-path\") pod \"cilium-operator-599987898-c9pxv\" (UID: \"5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51\") " pod="kube-system/cilium-operator-599987898-c9pxv" Mar 17 18:50:35.646594 kubelet[2719]: I0317 18:50:35.646591 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-cgroup\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp" Mar 17 18:50:35.646686 kubelet[2719]: I0317 18:50:35.646607 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598hw\" (UniqueName: \"kubernetes.io/projected/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-kube-api-access-598hw\") pod \"cilium-operator-599987898-c9pxv\" (UID: \"5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51\") " pod="kube-system/cilium-operator-599987898-c9pxv" Mar 17 18:50:35.646686 kubelet[2719]: I0317 18:50:35.646658 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fafe99a9-efe0-46ec-9563-915e323e6863-xtables-lock\") pod \"kube-proxy-z42hm\" (UID: \"fafe99a9-efe0-46ec-9563-915e323e6863\") " pod="kube-system/kube-proxy-z42hm" Mar 17 18:50:35.646686 kubelet[2719]: I0317 18:50:35.646683 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cni-path\") pod \"cilium-n5vxp\" (UID: 
\"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646754 kubelet[2719]: I0317 18:50:35.646720 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-etc-cni-netd\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646754 kubelet[2719]: I0317 18:50:35.646736 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-net\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646801 kubelet[2719]: I0317 18:50:35.646752 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-run\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646801 kubelet[2719]: I0317 18:50:35.646796 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4j5l\" (UniqueName: \"kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-kube-api-access-x4j5l\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646869 kubelet[2719]: I0317 18:50:35.646812 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-xtables-lock\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646869 kubelet[2719]: I0317 18:50:35.646858 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-kernel\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646918 kubelet[2719]: I0317 18:50:35.646874 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hubble-tls\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:35.646945 kubelet[2719]: I0317 18:50:35.646890 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hostproc\") pod \"cilium-n5vxp\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " pod="kube-system/cilium-n5vxp"
Mar 17 18:50:36.770043 kubelet[2719]: E0317 18:50:36.770003 2719 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.770043 kubelet[2719]: E0317 18:50:36.770040 2719 projected.go:200] Error preparing data for projected volume kube-api-access-x4j5l for pod kube-system/cilium-n5vxp: failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.770433 kubelet[2719]: E0317 18:50:36.770123 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-kube-api-access-x4j5l podName:ada4742e-e57f-46ea-bfe3-7e743ca4b565 nodeName:}" failed. No retries permitted until 2025-03-17 18:50:37.270100938 +0000 UTC m=+18.215795482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x4j5l" (UniqueName: "kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-kube-api-access-x4j5l") pod "cilium-n5vxp" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565") : failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.770433 kubelet[2719]: E0317 18:50:36.770284 2719 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.770433 kubelet[2719]: E0317 18:50:36.770299 2719 projected.go:200] Error preparing data for projected volume kube-api-access-frzzv for pod kube-system/kube-proxy-z42hm: failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.770433 kubelet[2719]: E0317 18:50:36.770327 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fafe99a9-efe0-46ec-9563-915e323e6863-kube-api-access-frzzv podName:fafe99a9-efe0-46ec-9563-915e323e6863 nodeName:}" failed. No retries permitted until 2025-03-17 18:50:37.270319205 +0000 UTC m=+18.216013789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-frzzv" (UniqueName: "kubernetes.io/projected/fafe99a9-efe0-46ec-9563-915e323e6863-kube-api-access-frzzv") pod "kube-proxy-z42hm" (UID: "fafe99a9-efe0-46ec-9563-915e323e6863") : failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.773546 kubelet[2719]: E0317 18:50:36.773516 2719 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.773649 kubelet[2719]: E0317 18:50:36.773551 2719 projected.go:200] Error preparing data for projected volume kube-api-access-598hw for pod kube-system/cilium-operator-599987898-c9pxv: failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:36.773649 kubelet[2719]: E0317 18:50:36.773608 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-kube-api-access-598hw podName:5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51 nodeName:}" failed. No retries permitted until 2025-03-17 18:50:37.273590077 +0000 UTC m=+18.219284661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-598hw" (UniqueName: "kubernetes.io/projected/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-kube-api-access-598hw") pod "cilium-operator-599987898-c9pxv" (UID: "5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51") : failed to sync configmap cache: timed out waiting for the condition
Mar 17 18:50:37.374173 env[1558]: time="2025-03-17T18:50:37.374117038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5vxp,Uid:ada4742e-e57f-46ea-bfe3-7e743ca4b565,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:37.423762 env[1558]: time="2025-03-17T18:50:37.423709796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-c9pxv,Uid:5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:37.438134 env[1558]: time="2025-03-17T18:50:37.429616869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:37.438134 env[1558]: time="2025-03-17T18:50:37.429880772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:37.438134 env[1558]: time="2025-03-17T18:50:37.429894172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:37.438134 env[1558]: time="2025-03-17T18:50:37.430019364Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c pid=2806 runtime=io.containerd.runc.v2
Mar 17 18:50:37.482860 env[1558]: time="2025-03-17T18:50:37.482768686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:37.483023 env[1558]: time="2025-03-17T18:50:37.482869519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:37.483023 env[1558]: time="2025-03-17T18:50:37.482897038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:37.483358 env[1558]: time="2025-03-17T18:50:37.483073867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795 pid=2832 runtime=io.containerd.runc.v2
Mar 17 18:50:37.511546 env[1558]: time="2025-03-17T18:50:37.511454783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5vxp,Uid:ada4742e-e57f-46ea-bfe3-7e743ca4b565,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\""
Mar 17 18:50:37.513231 env[1558]: time="2025-03-17T18:50:37.513181715Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:50:37.535438 env[1558]: time="2025-03-17T18:50:37.535388095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-c9pxv,Uid:5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51,Namespace:kube-system,Attempt:0,} returns sandbox id \"646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795\""
Mar 17 18:50:37.653197 env[1558]: time="2025-03-17T18:50:37.653096220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z42hm,Uid:fafe99a9-efe0-46ec-9563-915e323e6863,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:37.711925 env[1558]: time="2025-03-17T18:50:37.711809771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:37.711925 env[1558]: time="2025-03-17T18:50:37.711882286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:37.711925 env[1558]: time="2025-03-17T18:50:37.711893206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:37.712333 env[1558]: time="2025-03-17T18:50:37.712296261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c75f5f71c94792405c36d79f1ef82492bdccdf09fd662f5a981d363612ac4d4f pid=2888 runtime=io.containerd.runc.v2
Mar 17 18:50:37.746023 env[1558]: time="2025-03-17T18:50:37.745954569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z42hm,Uid:fafe99a9-efe0-46ec-9563-915e323e6863,Namespace:kube-system,Attempt:0,} returns sandbox id \"c75f5f71c94792405c36d79f1ef82492bdccdf09fd662f5a981d363612ac4d4f\""
Mar 17 18:50:37.749205 env[1558]: time="2025-03-17T18:50:37.749163769Z" level=info msg="CreateContainer within sandbox \"c75f5f71c94792405c36d79f1ef82492bdccdf09fd662f5a981d363612ac4d4f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:50:37.796424 env[1558]: time="2025-03-17T18:50:37.796377475Z" level=info msg="CreateContainer within sandbox \"c75f5f71c94792405c36d79f1ef82492bdccdf09fd662f5a981d363612ac4d4f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9dfe130a39af39f2d23b17884436530e3699cd6fc6c18d862bd8bec48734c1eb\""
Mar 17 18:50:37.798592 env[1558]: time="2025-03-17T18:50:37.797485286Z" level=info msg="StartContainer for \"9dfe130a39af39f2d23b17884436530e3699cd6fc6c18d862bd8bec48734c1eb\""
Mar 17 18:50:37.846528 env[1558]: time="2025-03-17T18:50:37.846464602Z" level=info msg="StartContainer for \"9dfe130a39af39f2d23b17884436530e3699cd6fc6c18d862bd8bec48734c1eb\" returns successfully"
Mar 17 18:50:39.209507 kubelet[2719]: I0317 18:50:39.209445 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z42hm" podStartSLOduration=4.209425234 podStartE2EDuration="4.209425234s" podCreationTimestamp="2025-03-17 18:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:38.262712321 +0000 UTC m=+19.208406905" watchObservedRunningTime="2025-03-17 18:50:39.209425234 +0000 UTC m=+20.155119818"
Mar 17 18:50:44.447625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667108733.mount: Deactivated successfully.
Mar 17 18:50:47.022384 env[1558]: time="2025-03-17T18:50:47.022342175Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:47.045215 env[1558]: time="2025-03-17T18:50:47.045178430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:47.054578 env[1558]: time="2025-03-17T18:50:47.054540264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:47.055477 env[1558]: time="2025-03-17T18:50:47.055448577Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 17 18:50:47.057525 env[1558]: time="2025-03-17T18:50:47.057298281Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:50:47.060078 env[1558]: time="2025-03-17T18:50:47.060050298Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:50:47.097410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736389407.mount: Deactivated successfully.
Mar 17 18:50:47.104027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount110434332.mount: Deactivated successfully.
Mar 17 18:50:47.135918 env[1558]: time="2025-03-17T18:50:47.135868602Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\""
Mar 17 18:50:47.137024 env[1558]: time="2025-03-17T18:50:47.136996264Z" level=info msg="StartContainer for \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\""
Mar 17 18:50:47.196887 env[1558]: time="2025-03-17T18:50:47.196847437Z" level=info msg="StartContainer for \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\" returns successfully"
Mar 17 18:50:48.095103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1-rootfs.mount: Deactivated successfully.
Mar 17 18:50:48.337534 env[1558]: time="2025-03-17T18:50:48.337485952Z" level=info msg="shim disconnected" id=62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1
Mar 17 18:50:48.337894 env[1558]: time="2025-03-17T18:50:48.337557829Z" level=warning msg="cleaning up after shim disconnected" id=62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1 namespace=k8s.io
Mar 17 18:50:48.337894 env[1558]: time="2025-03-17T18:50:48.337570308Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:48.347275 env[1558]: time="2025-03-17T18:50:48.346930950Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3128 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:49.274388 env[1558]: time="2025-03-17T18:50:49.274333188Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:50:49.309206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658054994.mount: Deactivated successfully.
Mar 17 18:50:49.314400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393244810.mount: Deactivated successfully.
Mar 17 18:50:49.337132 env[1558]: time="2025-03-17T18:50:49.337066637Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\""
Mar 17 18:50:49.338386 env[1558]: time="2025-03-17T18:50:49.338034908Z" level=info msg="StartContainer for \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\""
Mar 17 18:50:49.399123 env[1558]: time="2025-03-17T18:50:49.399055924Z" level=info msg="StartContainer for \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\" returns successfully"
Mar 17 18:50:49.399726 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:50:49.401551 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:50:49.401711 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:50:49.403728 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:50:49.415707 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:50:49.451903 env[1558]: time="2025-03-17T18:50:49.451796515Z" level=info msg="shim disconnected" id=d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360
Mar 17 18:50:49.451903 env[1558]: time="2025-03-17T18:50:49.451856472Z" level=warning msg="cleaning up after shim disconnected" id=d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360 namespace=k8s.io
Mar 17 18:50:49.451903 env[1558]: time="2025-03-17T18:50:49.451865671Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:49.459697 env[1558]: time="2025-03-17T18:50:49.459647720Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3192 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:50.277057 env[1558]: time="2025-03-17T18:50:50.276687465Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:50:50.303946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360-rootfs.mount: Deactivated successfully.
Mar 17 18:50:50.325327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784194174.mount: Deactivated successfully.
Mar 17 18:50:50.333004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082000186.mount: Deactivated successfully.
Mar 17 18:50:50.364588 env[1558]: time="2025-03-17T18:50:50.364532244Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\""
Mar 17 18:50:50.366917 env[1558]: time="2025-03-17T18:50:50.366885647Z" level=info msg="StartContainer for \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\""
Mar 17 18:50:50.442203 env[1558]: time="2025-03-17T18:50:50.442133608Z" level=info msg="StartContainer for \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\" returns successfully"
Mar 17 18:50:50.584829 env[1558]: time="2025-03-17T18:50:50.584364979Z" level=info msg="shim disconnected" id=b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f
Mar 17 18:50:50.584829 env[1558]: time="2025-03-17T18:50:50.584411417Z" level=warning msg="cleaning up after shim disconnected" id=b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f namespace=k8s.io
Mar 17 18:50:50.584829 env[1558]: time="2025-03-17T18:50:50.584423376Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:50.614474 env[1558]: time="2025-03-17T18:50:50.614438773Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3252 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:50.874587 env[1558]: time="2025-03-17T18:50:50.874211254Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:50.884633 env[1558]: time="2025-03-17T18:50:50.884580621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:50.889252 env[1558]: time="2025-03-17T18:50:50.889216952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:50:50.889738 env[1558]: time="2025-03-17T18:50:50.889709528Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 17 18:50:50.893911 env[1558]: time="2025-03-17T18:50:50.893870242Z" level=info msg="CreateContainer within sandbox \"646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:50:50.932773 env[1558]: time="2025-03-17T18:50:50.932722522Z" level=info msg="CreateContainer within sandbox \"646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\""
Mar 17 18:50:50.933495 env[1558]: time="2025-03-17T18:50:50.933466965Z" level=info msg="StartContainer for \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\""
Mar 17 18:50:50.984290 env[1558]: time="2025-03-17T18:50:50.984229937Z" level=info msg="StartContainer for \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\" returns successfully"
Mar 17 18:50:51.280767 env[1558]: time="2025-03-17T18:50:51.280720821Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:50:51.292575 kubelet[2719]: I0317 18:50:51.292514 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-c9pxv" podStartSLOduration=2.938364546 podStartE2EDuration="16.292498088s" podCreationTimestamp="2025-03-17 18:50:35 +0000 UTC" firstStartedPulling="2025-03-17 18:50:37.536955998 +0000 UTC m=+18.482650542" lastFinishedPulling="2025-03-17 18:50:50.8910895 +0000 UTC m=+31.836784084" observedRunningTime="2025-03-17 18:50:51.291481218 +0000 UTC m=+32.237175802" watchObservedRunningTime="2025-03-17 18:50:51.292498088 +0000 UTC m=+32.238192632"
Mar 17 18:50:51.332043 env[1558]: time="2025-03-17T18:50:51.331999687Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\""
Mar 17 18:50:51.332650 env[1558]: time="2025-03-17T18:50:51.332597618Z" level=info msg="StartContainer for \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\""
Mar 17 18:50:51.443348 env[1558]: time="2025-03-17T18:50:51.443306152Z" level=info msg="StartContainer for \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\" returns successfully"
Mar 17 18:50:51.643940 env[1558]: time="2025-03-17T18:50:51.643839117Z" level=info msg="shim disconnected" id=55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9
Mar 17 18:50:51.644213 env[1558]: time="2025-03-17T18:50:51.644194220Z" level=warning msg="cleaning up after shim disconnected" id=55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9 namespace=k8s.io
Mar 17 18:50:51.644319 env[1558]: time="2025-03-17T18:50:51.644301935Z" level=info msg="cleaning up dead shim"
Mar 17 18:50:51.651555 env[1558]: time="2025-03-17T18:50:51.651517784Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:50:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3345 runtime=io.containerd.runc.v2\n"
Mar 17 18:50:52.286477 env[1558]: time="2025-03-17T18:50:52.286400236Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:50:52.304018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9-rootfs.mount: Deactivated successfully.
Mar 17 18:50:52.327282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479076049.mount: Deactivated successfully.
Mar 17 18:50:52.335029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975086255.mount: Deactivated successfully.
Mar 17 18:50:52.362225 env[1558]: time="2025-03-17T18:50:52.362173167Z" level=info msg="CreateContainer within sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\""
Mar 17 18:50:52.364334 env[1558]: time="2025-03-17T18:50:52.364291746Z" level=info msg="StartContainer for \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\""
Mar 17 18:50:52.414456 env[1558]: time="2025-03-17T18:50:52.414417425Z" level=info msg="StartContainer for \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\" returns successfully"
Mar 17 18:50:52.523849 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:50:52.567697 kubelet[2719]: I0317 18:50:52.566343 2719 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:50:52.599994 kubelet[2719]: I0317 18:50:52.599950 2719 topology_manager.go:215] "Topology Admit Handler" podUID="89f12ad4-f515-4a09-907a-3e0aa3149f8d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d79vc"
Mar 17 18:50:52.605006 kubelet[2719]: I0317 18:50:52.604964 2719 topology_manager.go:215] "Topology Admit Handler" podUID="c2310f5d-5529-478c-b5da-a45ecf69b2d0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f7mr5"
Mar 17 18:50:52.661877 kubelet[2719]: I0317 18:50:52.661844 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89f12ad4-f515-4a09-907a-3e0aa3149f8d-config-volume\") pod \"coredns-7db6d8ff4d-d79vc\" (UID: \"89f12ad4-f515-4a09-907a-3e0aa3149f8d\") " pod="kube-system/coredns-7db6d8ff4d-d79vc"
Mar 17 18:50:52.662074 kubelet[2719]: I0317 18:50:52.662057 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbkvd\" (UniqueName: \"kubernetes.io/projected/c2310f5d-5529-478c-b5da-a45ecf69b2d0-kube-api-access-bbkvd\") pod \"coredns-7db6d8ff4d-f7mr5\" (UID: \"c2310f5d-5529-478c-b5da-a45ecf69b2d0\") " pod="kube-system/coredns-7db6d8ff4d-f7mr5"
Mar 17 18:50:52.662173 kubelet[2719]: I0317 18:50:52.662161 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vftzg\" (UniqueName: \"kubernetes.io/projected/89f12ad4-f515-4a09-907a-3e0aa3149f8d-kube-api-access-vftzg\") pod \"coredns-7db6d8ff4d-d79vc\" (UID: \"89f12ad4-f515-4a09-907a-3e0aa3149f8d\") " pod="kube-system/coredns-7db6d8ff4d-d79vc"
Mar 17 18:50:52.662275 kubelet[2719]: I0317 18:50:52.662262 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2310f5d-5529-478c-b5da-a45ecf69b2d0-config-volume\") pod \"coredns-7db6d8ff4d-f7mr5\" (UID: \"c2310f5d-5529-478c-b5da-a45ecf69b2d0\") " pod="kube-system/coredns-7db6d8ff4d-f7mr5"
Mar 17 18:50:52.904765 env[1558]: time="2025-03-17T18:50:52.904641708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d79vc,Uid:89f12ad4-f515-4a09-907a-3e0aa3149f8d,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:52.908286 env[1558]: time="2025-03-17T18:50:52.908238056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f7mr5,Uid:c2310f5d-5529-478c-b5da-a45ecf69b2d0,Namespace:kube-system,Attempt:0,}"
Mar 17 18:50:52.998855 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:50:53.311921 kubelet[2719]: I0317 18:50:53.311848 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n5vxp" podStartSLOduration=8.767704794 podStartE2EDuration="18.311829755s" podCreationTimestamp="2025-03-17 18:50:35 +0000 UTC" firstStartedPulling="2025-03-17 18:50:37.512772061 +0000 UTC m=+18.458466645" lastFinishedPulling="2025-03-17 18:50:47.056897022 +0000 UTC m=+28.002591606" observedRunningTime="2025-03-17 18:50:53.310277588 +0000 UTC m=+34.255972212" watchObservedRunningTime="2025-03-17 18:50:53.311829755 +0000 UTC m=+34.257524339"
Mar 17 18:50:54.337993 waagent[1796]: 2025-03-17T18:50:54.337896Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Mar 17 18:50:54.348635 waagent[1796]: 2025-03-17T18:50:54.348572Z INFO ExtHandler
Mar 17 18:50:54.349001 waagent[1796]: 2025-03-17T18:50:54.348950Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1dcc8a0b-d4ad-40b9-a369-09afa1390b4b eTag: 9840355415196816354 source: Fabric]
Mar 17 18:50:54.349922 waagent[1796]: 2025-03-17T18:50:54.349866Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 17 18:50:54.351326 waagent[1796]: 2025-03-17T18:50:54.351268Z INFO ExtHandler
Mar 17 18:50:54.351559 waagent[1796]: 2025-03-17T18:50:54.351511Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Mar 17 18:50:54.427005 waagent[1796]: 2025-03-17T18:50:54.426928Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 17 18:50:54.522791 waagent[1796]: 2025-03-17T18:50:54.522649Z INFO ExtHandler Downloaded certificate {'thumbprint': '5A2EFE6206636ACFA4F3F71D3B774842D91B5181', 'hasPrivateKey': False}
Mar 17 18:50:54.524201 waagent[1796]: 2025-03-17T18:50:54.524137Z INFO ExtHandler Downloaded certificate {'thumbprint': '4B4C2F9F762367CFC53E29AA17BBDCE1C67FEE07', 'hasPrivateKey': True}
Mar 17 18:50:54.525441 waagent[1796]: 2025-03-17T18:50:54.525381Z INFO ExtHandler Fetch goal state completed
Mar 17 18:50:54.526634 waagent[1796]: 2025-03-17T18:50:54.526576Z INFO ExtHandler ExtHandler VM enabled for RSM updates, switching to RSM update mode
Mar 17 18:50:54.528330 waagent[1796]: 2025-03-17T18:50:54.528273Z INFO ExtHandler ExtHandler
Mar 17 18:50:54.528577 waagent[1796]: 2025-03-17T18:50:54.528524Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: c75b1c4d-33f1-4afa-b600-c2b579c7d16c correlation cf49a652-ac9b-48ec-afe7-f718342f228f created: 2025-03-17T18:50:46.223373Z]
Mar 17 18:50:54.529510 waagent[1796]: 2025-03-17T18:50:54.529451Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 17 18:50:54.531684 waagent[1796]: 2025-03-17T18:50:54.531628Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 3 ms]
Mar 17 18:50:54.667862 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:50:54.668683 systemd-networkd[1729]: cilium_host: Link UP
Mar 17 18:50:54.669309 systemd-networkd[1729]: cilium_net: Link UP
Mar 17 18:50:54.669323 systemd-networkd[1729]: cilium_net: Gained carrier
Mar 17 18:50:54.669880 systemd-networkd[1729]: cilium_host: Gained carrier
Mar 17 18:50:54.678611 systemd-networkd[1729]: cilium_host: Gained IPv6LL
Mar 17 18:50:54.811872 systemd-networkd[1729]: cilium_vxlan: Link UP
Mar 17 18:50:54.811879 systemd-networkd[1729]: cilium_vxlan: Gained carrier
Mar 17 18:50:55.077860 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:50:55.322056 systemd-networkd[1729]: cilium_net: Gained IPv6LL
Mar 17 18:50:55.761628 systemd-networkd[1729]: lxc_health: Link UP
Mar 17 18:50:55.804898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:50:55.804808 systemd-networkd[1729]: lxc_health: Gained carrier
Mar 17 18:50:55.893017 systemd-networkd[1729]: cilium_vxlan: Gained IPv6LL
Mar 17 18:50:55.982549 systemd-networkd[1729]: lxc6cc34445ee26: Link UP
Mar 17 18:50:55.997525 kernel: eth0: renamed from tmp00274
Mar 17 18:50:56.014934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6cc34445ee26: link becomes ready
Mar 17 18:50:56.013479 systemd-networkd[1729]: lxc6cc34445ee26: Gained carrier
Mar 17 18:50:56.034972 systemd-networkd[1729]: lxcb08b80642101: Link UP
Mar 17 18:50:56.043842 kernel: eth0: renamed from tmpe3135
Mar 17 18:50:56.056277 systemd-networkd[1729]: lxcb08b80642101: Gained carrier
Mar 17 18:50:56.056876 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb08b80642101: link becomes ready
Mar 17 18:50:57.301005 systemd-networkd[1729]: lxc_health: Gained IPv6LL
Mar 17 18:50:57.301252 systemd-networkd[1729]: lxcb08b80642101: Gained IPv6LL
Mar 17 18:50:58.069969 systemd-networkd[1729]: lxc6cc34445ee26: Gained IPv6LL
Mar 17 18:50:59.670261 env[1558]: time="2025-03-17T18:50:59.637301527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:59.670261 env[1558]: time="2025-03-17T18:50:59.637354485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:59.670261 env[1558]: time="2025-03-17T18:50:59.637364524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:59.670261 env[1558]: time="2025-03-17T18:50:59.637494599Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/002746f29a80520c8cff4f51f8986f3313822b54e107ace999c391b2b5a1f3f4 pid=3895 runtime=io.containerd.runc.v2
Mar 17 18:50:59.678354 env[1558]: time="2025-03-17T18:50:59.648682675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:50:59.678528 env[1558]: time="2025-03-17T18:50:59.648743752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:50:59.678528 env[1558]: time="2025-03-17T18:50:59.648764272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:50:59.678528 env[1558]: time="2025-03-17T18:50:59.648938984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3135b6630c002cd5d9b570aedd628198c42762e2de753fa671b78bfa4ea7f4c pid=3912 runtime=io.containerd.runc.v2
Mar 17 18:50:59.717224 env[1558]: time="2025-03-17T18:50:59.717166954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d79vc,Uid:89f12ad4-f515-4a09-907a-3e0aa3149f8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"002746f29a80520c8cff4f51f8986f3313822b54e107ace999c391b2b5a1f3f4\""
Mar 17 18:50:59.722583 env[1558]: time="2025-03-17T18:50:59.722456646Z" level=info msg="CreateContainer within sandbox \"002746f29a80520c8cff4f51f8986f3313822b54e107ace999c391b2b5a1f3f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:50:59.751021 env[1558]: time="2025-03-17T18:50:59.750976733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f7mr5,Uid:c2310f5d-5529-478c-b5da-a45ecf69b2d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3135b6630c002cd5d9b570aedd628198c42762e2de753fa671b78bfa4ea7f4c\""
Mar 17 18:50:59.762618 env[1558]: time="2025-03-17T18:50:59.762400119Z" level=info msg="CreateContainer within sandbox \"e3135b6630c002cd5d9b570aedd628198c42762e2de753fa671b78bfa4ea7f4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:50:59.798777 env[1558]: time="2025-03-17T18:50:59.798721869Z" level=info msg="CreateContainer within sandbox \"002746f29a80520c8cff4f51f8986f3313822b54e107ace999c391b2b5a1f3f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"725507e705ce8bcc9b87959fbb03a3a7274a16ea5605105009263ca156c0a852\""
Mar 17 18:50:59.801086 env[1558]: time="2025-03-17T18:50:59.799416559Z" level=info msg="StartContainer for \"725507e705ce8bcc9b87959fbb03a3a7274a16ea5605105009263ca156c0a852\""
Mar 17 18:50:59.834600 env[1558]: time="2025-03-17T18:50:59.834551440Z" level=info msg="CreateContainer within sandbox \"e3135b6630c002cd5d9b570aedd628198c42762e2de753fa671b78bfa4ea7f4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2f7471aae4dd625095069cb66fdd721cb346e9ae1f4b27ceac78228cfd6aedf\""
Mar 17 18:50:59.836452 env[1558]: time="2025-03-17T18:50:59.836414479Z" level=info msg="StartContainer for \"c2f7471aae4dd625095069cb66fdd721cb346e9ae1f4b27ceac78228cfd6aedf\""
Mar 17 18:50:59.861201 env[1558]: time="2025-03-17T18:50:59.861148450Z" level=info msg="StartContainer for \"725507e705ce8bcc9b87959fbb03a3a7274a16ea5605105009263ca156c0a852\" returns successfully"
Mar 17 18:50:59.919346 env[1558]: time="2025-03-17T18:50:59.919284736Z" level=info msg="StartContainer for \"c2f7471aae4dd625095069cb66fdd721cb346e9ae1f4b27ceac78228cfd6aedf\" returns successfully"
Mar 17 18:51:00.331802 kubelet[2719]: I0317 18:51:00.331736 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f7mr5" podStartSLOduration=25.33171558 podStartE2EDuration="25.33171558s" podCreationTimestamp="2025-03-17 18:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:00.313992655 +0000 UTC m=+41.259687239" watchObservedRunningTime="2025-03-17 18:51:00.33171558 +0000 UTC m=+41.277410164"
Mar 17 18:51:00.332674 kubelet[2719]: I0317 18:51:00.332631 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d79vc" podStartSLOduration=25.332619221 podStartE2EDuration="25.332619221s" podCreationTimestamp="2025-03-17 18:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:00.328744306 +0000 UTC m=+41.274438890" watchObservedRunningTime="2025-03-17 18:51:00.332619221 +0000 UTC
m=+41.278313805" Mar 17 18:52:07.264608 update_engine[1546]: I0317 18:52:07.264559 1546 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 18:52:07.264608 update_engine[1546]: I0317 18:52:07.264601 1546 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 18:52:07.265103 update_engine[1546]: I0317 18:52:07.264734 1546 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 18:52:07.265207 update_engine[1546]: I0317 18:52:07.265178 1546 omaha_request_params.cc:62] Current group set to lts Mar 17 18:52:07.265432 update_engine[1546]: I0317 18:52:07.265285 1546 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 18:52:07.265432 update_engine[1546]: I0317 18:52:07.265295 1546 update_attempter.cc:643] Scheduling an action processor start. Mar 17 18:52:07.265432 update_engine[1546]: I0317 18:52:07.265311 1546 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:52:07.265432 update_engine[1546]: I0317 18:52:07.265332 1546 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 18:52:07.265756 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 18:52:07.281149 update_engine[1546]: I0317 18:52:07.281111 1546 omaha_request_action.cc:270] Posting an Omaha request to disabled Mar 17 18:52:07.281149 update_engine[1546]: I0317 18:52:07.281140 1546 omaha_request_action.cc:271] Request: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: Mar 17 18:52:07.281149 update_engine[1546]: I0317 18:52:07.281149 1546 
libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:52:07.410513 update_engine[1546]: I0317 18:52:07.410468 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:52:07.410780 update_engine[1546]: I0317 18:52:07.410758 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:52:07.517170 update_engine[1546]: E0317 18:52:07.517055 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:52:07.517294 update_engine[1546]: I0317 18:52:07.517184 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 18:52:18.181391 update_engine[1546]: I0317 18:52:18.181347 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:52:18.181775 update_engine[1546]: I0317 18:52:18.181556 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:52:18.181775 update_engine[1546]: I0317 18:52:18.181748 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:52:18.258542 update_engine[1546]: E0317 18:52:18.258503 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:52:18.258700 update_engine[1546]: I0317 18:52:18.258620 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 18:52:28.177844 update_engine[1546]: I0317 18:52:28.177793 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:52:28.178195 update_engine[1546]: I0317 18:52:28.178017 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:52:28.178228 update_engine[1546]: I0317 18:52:28.178198 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 18:52:28.226764 update_engine[1546]: E0317 18:52:28.226727 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:52:28.226918 update_engine[1546]: I0317 18:52:28.226859 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 18:52:38.179783 update_engine[1546]: I0317 18:52:38.179714 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:52:38.180233 update_engine[1546]: I0317 18:52:38.180001 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:52:38.180233 update_engine[1546]: I0317 18:52:38.180179 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:52:38.230392 update_engine[1546]: E0317 18:52:38.230352 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:52:38.230539 update_engine[1546]: I0317 18:52:38.230456 1546 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 18:52:38.230539 update_engine[1546]: I0317 18:52:38.230463 1546 omaha_request_action.cc:621] Omaha request response: Mar 17 18:52:38.230589 update_engine[1546]: E0317 18:52:38.230541 1546 omaha_request_action.cc:640] Omaha request network transfer failed. Mar 17 18:52:38.230589 update_engine[1546]: I0317 18:52:38.230556 1546 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 18:52:38.230589 update_engine[1546]: I0317 18:52:38.230558 1546 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:52:38.230589 update_engine[1546]: I0317 18:52:38.230561 1546 update_attempter.cc:306] Processing Done. Mar 17 18:52:38.230589 update_engine[1546]: E0317 18:52:38.230571 1546 update_attempter.cc:619] Update failed. 
Mar 17 18:52:38.230589 update_engine[1546]: I0317 18:52:38.230574 1546 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 18:52:38.230589 update_engine[1546]: I0317 18:52:38.230577 1546 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 18:52:38.230589 update_engine[1546]: I0317 18:52:38.230580 1546 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 17 18:52:38.230761 update_engine[1546]: I0317 18:52:38.230646 1546 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:52:38.230761 update_engine[1546]: I0317 18:52:38.230665 1546 omaha_request_action.cc:270] Posting an Omaha request to disabled Mar 17 18:52:38.230761 update_engine[1546]: I0317 18:52:38.230669 1546 omaha_request_action.cc:271] Request: Mar 17 18:52:38.230761 update_engine[1546]: Mar 17 18:52:38.230761 update_engine[1546]: Mar 17 18:52:38.230761 update_engine[1546]: Mar 17 18:52:38.230761 update_engine[1546]: Mar 17 18:52:38.230761 update_engine[1546]: Mar 17 18:52:38.230761 update_engine[1546]: Mar 17 18:52:38.230761 update_engine[1546]: I0317 18:52:38.230672 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:52:38.230980 update_engine[1546]: I0317 18:52:38.230808 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:52:38.231005 update_engine[1546]: I0317 18:52:38.230979 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 18:52:38.231306 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 17 18:52:38.311548 update_engine[1546]: E0317 18:52:38.311511 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:52:38.311687 update_engine[1546]: I0317 18:52:38.311625 1546 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 18:52:38.311687 update_engine[1546]: I0317 18:52:38.311632 1546 omaha_request_action.cc:621] Omaha request response: Mar 17 18:52:38.311687 update_engine[1546]: I0317 18:52:38.311637 1546 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:52:38.311687 update_engine[1546]: I0317 18:52:38.311640 1546 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:52:38.311687 update_engine[1546]: I0317 18:52:38.311643 1546 update_attempter.cc:306] Processing Done. Mar 17 18:52:38.311687 update_engine[1546]: I0317 18:52:38.311646 1546 update_attempter.cc:310] Error event sent. Mar 17 18:52:38.311687 update_engine[1546]: I0317 18:52:38.311654 1546 update_check_scheduler.cc:74] Next update check in 49m48s Mar 17 18:52:38.312006 locksmithd[1639]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 17 18:53:24.973962 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:58082.service. Mar 17 18:53:25.386442 sshd[4067]: Accepted publickey for core from 10.200.16.10 port 58082 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:25.388181 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:25.392974 systemd[1]: Started session-8.scope. Mar 17 18:53:25.393470 systemd-logind[1542]: New session 8 of user core. 
Mar 17 18:53:25.821047 sshd[4067]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:25.823773 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:53:25.824080 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:58082.service: Deactivated successfully. Mar 17 18:53:25.824925 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:53:25.826163 systemd-logind[1542]: Removed session 8. Mar 17 18:53:30.894650 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:39942.service. Mar 17 18:53:31.344013 sshd[4093]: Accepted publickey for core from 10.200.16.10 port 39942 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:31.345383 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:31.349359 systemd-logind[1542]: New session 9 of user core. Mar 17 18:53:31.350109 systemd[1]: Started session-9.scope. Mar 17 18:53:31.732056 sshd[4093]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:31.734629 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:39942.service: Deactivated successfully. Mar 17 18:53:31.735663 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:53:31.735732 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:53:31.736935 systemd-logind[1542]: Removed session 9. Mar 17 18:53:36.804307 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:39956.service. Mar 17 18:53:37.253577 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:37.257950 systemd[1]: Started session-10.scope. Mar 17 18:53:37.258346 systemd-logind[1542]: New session 10 of user core. Mar 17 18:53:37.479842 sshd[4108]: Accepted publickey for core from 10.200.16.10 port 39956 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY 
Mar 17 18:53:37.654071 sshd[4108]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:37.657578 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:53:37.658061 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:39956.service: Deactivated successfully. Mar 17 18:53:37.659116 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:53:37.660897 systemd-logind[1542]: Removed session 10. Mar 17 18:53:42.721122 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:54610.service. Mar 17 18:53:43.132653 sshd[4125]: Accepted publickey for core from 10.200.16.10 port 54610 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:43.134109 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:43.139653 systemd[1]: Started session-11.scope. Mar 17 18:53:43.140848 systemd-logind[1542]: New session 11 of user core. Mar 17 18:53:43.493173 sshd[4125]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:43.496027 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:54610.service: Deactivated successfully. Mar 17 18:53:43.496954 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:53:43.498169 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:53:43.499167 systemd-logind[1542]: Removed session 11. Mar 17 18:53:48.574210 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:40256.service. Mar 17 18:53:49.043149 sshd[4141]: Accepted publickey for core from 10.200.16.10 port 40256 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:49.044869 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:49.049731 systemd[1]: Started session-12.scope. Mar 17 18:53:49.050885 systemd-logind[1542]: New session 12 of user core. 
Mar 17 18:53:49.456723 sshd[4141]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:49.459523 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:40256.service: Deactivated successfully. Mar 17 18:53:49.460348 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:53:49.461118 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:53:49.461958 systemd-logind[1542]: Removed session 12. Mar 17 18:53:49.534472 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:40268.service. Mar 17 18:53:50.004177 sshd[4154]: Accepted publickey for core from 10.200.16.10 port 40268 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:50.005517 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:50.010713 systemd-logind[1542]: New session 13 of user core. Mar 17 18:53:50.011376 systemd[1]: Started session-13.scope. Mar 17 18:53:50.459536 sshd[4154]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:50.463256 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:40268.service: Deactivated successfully. Mar 17 18:53:50.464394 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:53:50.464488 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:53:50.465967 systemd-logind[1542]: Removed session 13. Mar 17 18:53:50.521885 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:40270.service. Mar 17 18:53:50.933854 sshd[4164]: Accepted publickey for core from 10.200.16.10 port 40270 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:50.935123 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:50.939691 systemd[1]: Started session-14.scope. Mar 17 18:53:50.940489 systemd-logind[1542]: New session 14 of user core. 
Mar 17 18:53:51.310009 sshd[4164]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:51.312855 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:53:51.313598 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:40270.service: Deactivated successfully. Mar 17 18:53:51.314493 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:53:51.315224 systemd-logind[1542]: Removed session 14. Mar 17 18:53:56.383381 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:40274.service. Mar 17 18:53:56.831055 sshd[4176]: Accepted publickey for core from 10.200.16.10 port 40274 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:53:56.832761 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:53:56.837144 systemd[1]: Started session-15.scope. Mar 17 18:53:56.837722 systemd-logind[1542]: New session 15 of user core. Mar 17 18:53:57.233942 sshd[4176]: pam_unix(sshd:session): session closed for user core Mar 17 18:53:57.237035 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:40274.service: Deactivated successfully. Mar 17 18:53:57.238174 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:53:57.238189 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:53:57.239330 systemd-logind[1542]: Removed session 15. Mar 17 18:54:02.308506 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:35402.service. Mar 17 18:54:02.756168 sshd[4193]: Accepted publickey for core from 10.200.16.10 port 35402 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:02.757502 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:02.762003 systemd-logind[1542]: New session 16 of user core. Mar 17 18:54:02.762317 systemd[1]: Started session-16.scope. 
Mar 17 18:54:03.133597 sshd[4193]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:03.136945 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:54:03.137262 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:35402.service: Deactivated successfully. Mar 17 18:54:03.138100 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:54:03.139248 systemd-logind[1542]: Removed session 16. Mar 17 18:54:03.213604 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:35418.service. Mar 17 18:54:03.683863 sshd[4206]: Accepted publickey for core from 10.200.16.10 port 35418 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:03.685224 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:03.689102 systemd-logind[1542]: New session 17 of user core. Mar 17 18:54:03.689932 systemd[1]: Started session-17.scope. Mar 17 18:54:04.142043 sshd[4206]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:04.144387 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:35418.service: Deactivated successfully. Mar 17 18:54:04.145341 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:54:04.145426 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:54:04.146788 systemd-logind[1542]: Removed session 17. Mar 17 18:54:04.211571 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:35432.service. Mar 17 18:54:04.665336 sshd[4216]: Accepted publickey for core from 10.200.16.10 port 35432 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:04.667018 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:04.671376 systemd[1]: Started session-18.scope. Mar 17 18:54:04.672419 systemd-logind[1542]: New session 18 of user core. 
Mar 17 18:54:06.311032 sshd[4216]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:06.313994 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:35432.service: Deactivated successfully. Mar 17 18:54:06.315400 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:54:06.316005 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:54:06.316943 systemd-logind[1542]: Removed session 18. Mar 17 18:54:06.377780 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:35438.service. Mar 17 18:54:06.788674 sshd[4234]: Accepted publickey for core from 10.200.16.10 port 35438 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:06.790043 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:06.794492 systemd[1]: Started session-19.scope. Mar 17 18:54:06.794685 systemd-logind[1542]: New session 19 of user core. Mar 17 18:54:07.263092 sshd[4234]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:07.265771 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:54:07.266470 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:35438.service: Deactivated successfully. Mar 17 18:54:07.267298 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:54:07.268256 systemd-logind[1542]: Removed session 19. Mar 17 18:54:07.329662 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:35450.service. Mar 17 18:54:07.743060 sshd[4245]: Accepted publickey for core from 10.200.16.10 port 35450 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:07.744302 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:07.748007 systemd-logind[1542]: New session 20 of user core. Mar 17 18:54:07.748730 systemd[1]: Started session-20.scope. 
Mar 17 18:54:08.126784 sshd[4245]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:08.132082 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:54:08.133543 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:35450.service: Deactivated successfully. Mar 17 18:54:08.134414 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:54:08.135253 systemd-logind[1542]: Removed session 20. Mar 17 18:54:13.201060 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:57704.service. Mar 17 18:54:13.633313 sshd[4263]: Accepted publickey for core from 10.200.16.10 port 57704 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:13.635303 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:13.640087 systemd[1]: Started session-21.scope. Mar 17 18:54:13.640437 systemd-logind[1542]: New session 21 of user core. Mar 17 18:54:14.019885 sshd[4263]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:14.022508 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:54:14.023188 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:57704.service: Deactivated successfully. Mar 17 18:54:14.024038 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:54:14.025131 systemd-logind[1542]: Removed session 21. Mar 17 18:54:19.097623 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:50634.service. Mar 17 18:54:19.569223 sshd[4275]: Accepted publickey for core from 10.200.16.10 port 50634 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:19.570564 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:19.575115 systemd[1]: Started session-22.scope. Mar 17 18:54:19.575444 systemd-logind[1542]: New session 22 of user core. 
Mar 17 18:54:19.998044 sshd[4275]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:20.001074 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:54:20.001540 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:50634.service: Deactivated successfully. Mar 17 18:54:20.002359 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:54:20.003767 systemd-logind[1542]: Removed session 22. Mar 17 18:54:25.061430 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:50648.service. Mar 17 18:54:25.472363 sshd[4291]: Accepted publickey for core from 10.200.16.10 port 50648 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:25.474151 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:25.478625 systemd[1]: Started session-23.scope. Mar 17 18:54:25.479794 systemd-logind[1542]: New session 23 of user core. Mar 17 18:54:25.849240 sshd[4291]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:25.852263 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:50648.service: Deactivated successfully. Mar 17 18:54:25.853785 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:54:25.854547 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:54:25.855550 systemd-logind[1542]: Removed session 23. Mar 17 18:54:25.923053 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:50650.service. Mar 17 18:54:26.377897 sshd[4304]: Accepted publickey for core from 10.200.16.10 port 50650 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:26.379595 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:26.383587 systemd-logind[1542]: New session 24 of user core. Mar 17 18:54:26.384111 systemd[1]: Started session-24.scope. 
Mar 17 18:54:28.269932 systemd[1]: run-containerd-runc-k8s.io-999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc-runc.yiS6mh.mount: Deactivated successfully. Mar 17 18:54:28.287073 env[1558]: time="2025-03-17T18:54:28.286991185Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:54:28.292634 env[1558]: time="2025-03-17T18:54:28.292590648Z" level=info msg="StopContainer for \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\" with timeout 2 (s)" Mar 17 18:54:28.293001 env[1558]: time="2025-03-17T18:54:28.292933404Z" level=info msg="Stop container \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\" with signal terminated" Mar 17 18:54:28.300639 env[1558]: time="2025-03-17T18:54:28.300593445Z" level=info msg="StopContainer for \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\" with timeout 30 (s)" Mar 17 18:54:28.301302 env[1558]: time="2025-03-17T18:54:28.301276638Z" level=info msg="Stop container \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\" with signal terminated" Mar 17 18:54:28.306371 systemd-networkd[1729]: lxc_health: Link DOWN Mar 17 18:54:28.306378 systemd-networkd[1729]: lxc_health: Lost carrier Mar 17 18:54:28.345019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e-rootfs.mount: Deactivated successfully. Mar 17 18:54:28.356795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc-rootfs.mount: Deactivated successfully. 
Mar 17 18:54:28.386269 env[1558]: time="2025-03-17T18:54:28.386222886Z" level=info msg="shim disconnected" id=999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc Mar 17 18:54:28.386584 env[1558]: time="2025-03-17T18:54:28.386565002Z" level=warning msg="cleaning up after shim disconnected" id=999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc namespace=k8s.io Mar 17 18:54:28.386697 env[1558]: time="2025-03-17T18:54:28.386682361Z" level=info msg="cleaning up dead shim" Mar 17 18:54:28.386798 env[1558]: time="2025-03-17T18:54:28.386223006Z" level=info msg="shim disconnected" id=64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e Mar 17 18:54:28.386856 env[1558]: time="2025-03-17T18:54:28.386805240Z" level=warning msg="cleaning up after shim disconnected" id=64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e namespace=k8s.io Mar 17 18:54:28.386856 env[1558]: time="2025-03-17T18:54:28.386815000Z" level=info msg="cleaning up dead shim" Mar 17 18:54:28.394336 env[1558]: time="2025-03-17T18:54:28.394287083Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4373 runtime=io.containerd.runc.v2\n" Mar 17 18:54:28.395286 env[1558]: time="2025-03-17T18:54:28.395256953Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4374 runtime=io.containerd.runc.v2\n" Mar 17 18:54:28.406471 env[1558]: time="2025-03-17T18:54:28.406045922Z" level=info msg="StopContainer for \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\" returns successfully" Mar 17 18:54:28.407584 env[1558]: time="2025-03-17T18:54:28.407155551Z" level=info msg="StopPodSandbox for \"646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795\"" Mar 17 18:54:28.407584 env[1558]: time="2025-03-17T18:54:28.407225590Z" level=info msg="Container to stop 
\"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.410929 env[1558]: time="2025-03-17T18:54:28.410889752Z" level=info msg="StopContainer for \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\" returns successfully" Mar 17 18:54:28.412216 env[1558]: time="2025-03-17T18:54:28.412165459Z" level=info msg="StopPodSandbox for \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\"" Mar 17 18:54:28.412372 env[1558]: time="2025-03-17T18:54:28.412350417Z" level=info msg="Container to stop \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.412452 env[1558]: time="2025-03-17T18:54:28.412433656Z" level=info msg="Container to stop \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.412513 env[1558]: time="2025-03-17T18:54:28.412497576Z" level=info msg="Container to stop \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.412573 env[1558]: time="2025-03-17T18:54:28.412557375Z" level=info msg="Container to stop \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.412636 env[1558]: time="2025-03-17T18:54:28.412619974Z" level=info msg="Container to stop \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:54:28.460860 env[1558]: time="2025-03-17T18:54:28.460799119Z" level=info msg="shim disconnected" id=0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c Mar 17 18:54:28.461149 env[1558]: 
time="2025-03-17T18:54:28.461132956Z" level=warning msg="cleaning up after shim disconnected" id=0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c namespace=k8s.io Mar 17 18:54:28.461242 env[1558]: time="2025-03-17T18:54:28.461211955Z" level=info msg="cleaning up dead shim" Mar 17 18:54:28.461508 env[1558]: time="2025-03-17T18:54:28.461467273Z" level=info msg="shim disconnected" id=646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795 Mar 17 18:54:28.461508 env[1558]: time="2025-03-17T18:54:28.461504272Z" level=warning msg="cleaning up after shim disconnected" id=646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795 namespace=k8s.io Mar 17 18:54:28.461595 env[1558]: time="2025-03-17T18:54:28.461513592Z" level=info msg="cleaning up dead shim" Mar 17 18:54:28.473030 env[1558]: time="2025-03-17T18:54:28.472979474Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4442 runtime=io.containerd.runc.v2\n" Mar 17 18:54:28.473349 env[1558]: time="2025-03-17T18:54:28.473307831Z" level=info msg="TearDown network for sandbox \"646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795\" successfully" Mar 17 18:54:28.473349 env[1558]: time="2025-03-17T18:54:28.473339711Z" level=info msg="StopPodSandbox for \"646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795\" returns successfully" Mar 17 18:54:28.478242 env[1558]: time="2025-03-17T18:54:28.478205981Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4441 runtime=io.containerd.runc.v2\n" Mar 17 18:54:28.482139 env[1558]: time="2025-03-17T18:54:28.481902383Z" level=info msg="TearDown network for sandbox \"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" successfully" Mar 17 18:54:28.482139 env[1558]: time="2025-03-17T18:54:28.481938782Z" level=info msg="StopPodSandbox for 
\"0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c\" returns successfully" Mar 17 18:54:28.567867 kubelet[2719]: I0317 18:54:28.565705 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-kernel\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.567867 kubelet[2719]: I0317 18:54:28.565761 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-config-path\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.567867 kubelet[2719]: I0317 18:54:28.565780 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-cilium-config-path\") pod \"5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51\" (UID: \"5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51\") " Mar 17 18:54:28.567867 kubelet[2719]: I0317 18:54:28.565807 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.567867 kubelet[2719]: I0317 18:54:28.565815 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598hw\" (UniqueName: \"kubernetes.io/projected/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-kube-api-access-598hw\") pod \"5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51\" (UID: \"5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51\") " Mar 17 18:54:28.567867 kubelet[2719]: I0317 18:54:28.565859 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-net\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.568780 kubelet[2719]: I0317 18:54:28.565883 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hostproc\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.568780 kubelet[2719]: I0317 18:54:28.565921 2719 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-kernel\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.568780 kubelet[2719]: I0317 18:54:28.565944 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hostproc" (OuterVolumeSpecName: "hostproc") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.568780 kubelet[2719]: I0317 18:54:28.566550 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.568780 kubelet[2719]: I0317 18:54:28.567746 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51" (UID: "5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:54:28.569449 kubelet[2719]: I0317 18:54:28.569400 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:54:28.571589 kubelet[2719]: I0317 18:54:28.571547 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-kube-api-access-598hw" (OuterVolumeSpecName: "kube-api-access-598hw") pod "5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51" (UID: "5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51"). InnerVolumeSpecName "kube-api-access-598hw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:28.663483 kubelet[2719]: I0317 18:54:28.663446 2719 scope.go:117] "RemoveContainer" containerID="999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc" Mar 17 18:54:28.666666 kubelet[2719]: I0317 18:54:28.666648 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-lib-modules\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.667278 kubelet[2719]: I0317 18:54:28.667220 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-etc-cni-netd\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.667379 env[1558]: time="2025-03-17T18:54:28.667207639Z" level=info msg="RemoveContainer for \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\"" Mar 17 18:54:28.668026 kubelet[2719]: I0317 18:54:28.668005 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4j5l\" (UniqueName: \"kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-kube-api-access-x4j5l\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668206 kubelet[2719]: I0317 18:54:28.668193 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-bpf-maps\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668330 kubelet[2719]: I0317 18:54:28.668319 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cni-path\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668406 kubelet[2719]: I0317 18:54:28.668396 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-xtables-lock\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668516 kubelet[2719]: I0317 18:54:28.668506 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hubble-tls\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668633 kubelet[2719]: I0317 18:54:28.668592 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ada4742e-e57f-46ea-bfe3-7e743ca4b565-clustermesh-secrets\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668716 kubelet[2719]: I0317 18:54:28.668706 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-cgroup\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668859 kubelet[2719]: I0317 18:54:28.668838 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-run\") pod \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\" (UID: \"ada4742e-e57f-46ea-bfe3-7e743ca4b565\") " Mar 17 18:54:28.668993 kubelet[2719]: I0317 18:54:28.668972 2719 
reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-config-path\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.669292 kubelet[2719]: I0317 18:54:28.669275 2719 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-cilium-config-path\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.669441 kubelet[2719]: I0317 18:54:28.669428 2719 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-598hw\" (UniqueName: \"kubernetes.io/projected/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51-kube-api-access-598hw\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.669521 kubelet[2719]: I0317 18:54:28.669509 2719 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-host-proc-sys-net\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.669595 kubelet[2719]: I0317 18:54:28.669582 2719 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hostproc\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.672977 kubelet[2719]: I0317 18:54:28.666840 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.673111 kubelet[2719]: I0317 18:54:28.668865 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.673226 kubelet[2719]: I0317 18:54:28.668992 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.673304 kubelet[2719]: I0317 18:54:28.669719 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.673367 kubelet[2719]: I0317 18:54:28.669733 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cni-path" (OuterVolumeSpecName: "cni-path") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.673896 kubelet[2719]: I0317 18:54:28.672920 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.674040 kubelet[2719]: I0317 18:54:28.672943 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:28.677440 kubelet[2719]: I0317 18:54:28.677409 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:28.680709 kubelet[2719]: I0317 18:54:28.680682 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada4742e-e57f-46ea-bfe3-7e743ca4b565-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:54:28.681103 kubelet[2719]: I0317 18:54:28.681036 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-kube-api-access-x4j5l" (OuterVolumeSpecName: "kube-api-access-x4j5l") pod "ada4742e-e57f-46ea-bfe3-7e743ca4b565" (UID: "ada4742e-e57f-46ea-bfe3-7e743ca4b565"). InnerVolumeSpecName "kube-api-access-x4j5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:28.685575 env[1558]: time="2025-03-17T18:54:28.685388132Z" level=info msg="RemoveContainer for \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\" returns successfully" Mar 17 18:54:28.686613 kubelet[2719]: I0317 18:54:28.686587 2719 scope.go:117] "RemoveContainer" containerID="55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9" Mar 17 18:54:28.688862 env[1558]: time="2025-03-17T18:54:28.688739978Z" level=info msg="RemoveContainer for \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\"" Mar 17 18:54:28.700053 env[1558]: time="2025-03-17T18:54:28.699976702Z" level=info msg="RemoveContainer for \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\" returns successfully" Mar 17 18:54:28.700436 kubelet[2719]: I0317 18:54:28.700400 2719 scope.go:117] "RemoveContainer" containerID="b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f" Mar 17 18:54:28.701946 env[1558]: time="2025-03-17T18:54:28.701888282Z" level=info msg="RemoveContainer for \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\"" Mar 17 18:54:28.720485 env[1558]: time="2025-03-17T18:54:28.720414412Z" level=info msg="RemoveContainer for \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\" returns successfully" Mar 17 18:54:28.720901 kubelet[2719]: I0317 18:54:28.720880 2719 scope.go:117] "RemoveContainer" containerID="d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360" Mar 
17 18:54:28.722435 env[1558]: time="2025-03-17T18:54:28.722399872Z" level=info msg="RemoveContainer for \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\"" Mar 17 18:54:28.737563 env[1558]: time="2025-03-17T18:54:28.737515516Z" level=info msg="RemoveContainer for \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\" returns successfully" Mar 17 18:54:28.737884 kubelet[2719]: I0317 18:54:28.737814 2719 scope.go:117] "RemoveContainer" containerID="62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1" Mar 17 18:54:28.739380 env[1558]: time="2025-03-17T18:54:28.739317258Z" level=info msg="RemoveContainer for \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\"" Mar 17 18:54:28.753950 env[1558]: time="2025-03-17T18:54:28.753865508Z" level=info msg="RemoveContainer for \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\" returns successfully" Mar 17 18:54:28.754331 kubelet[2719]: I0317 18:54:28.754278 2719 scope.go:117] "RemoveContainer" containerID="999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc" Mar 17 18:54:28.754946 env[1558]: time="2025-03-17T18:54:28.754858258Z" level=error msg="ContainerStatus for \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\": not found" Mar 17 18:54:28.755120 kubelet[2719]: E0317 18:54:28.755092 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\": not found" containerID="999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc" Mar 17 18:54:28.755292 kubelet[2719]: I0317 18:54:28.755167 2719 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc"} err="failed to get container status \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"999983f08f65f97d9b0fae031ae466b9c70d8a72489a13e2d802252db832e9dc\": not found" Mar 17 18:54:28.755334 kubelet[2719]: I0317 18:54:28.755292 2719 scope.go:117] "RemoveContainer" containerID="55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9" Mar 17 18:54:28.755606 env[1558]: time="2025-03-17T18:54:28.755516811Z" level=error msg="ContainerStatus for \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\": not found" Mar 17 18:54:28.755736 kubelet[2719]: E0317 18:54:28.755695 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\": not found" containerID="55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9" Mar 17 18:54:28.755787 kubelet[2719]: I0317 18:54:28.755743 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9"} err="failed to get container status \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\": rpc error: code = NotFound desc = an error occurred when try to find container \"55fcce3a859437d15a009807c98bc8be724a5d8b5ac1e33db2e4f3a5cf7dbbe9\": not found" Mar 17 18:54:28.755787 kubelet[2719]: I0317 18:54:28.755762 2719 scope.go:117] "RemoveContainer" containerID="b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f" Mar 17 18:54:28.756044 env[1558]: 
time="2025-03-17T18:54:28.755996887Z" level=error msg="ContainerStatus for \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\": not found" Mar 17 18:54:28.756223 kubelet[2719]: E0317 18:54:28.756137 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\": not found" containerID="b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f" Mar 17 18:54:28.756253 kubelet[2719]: I0317 18:54:28.756231 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f"} err="failed to get container status \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b267f430d99b2c6de1529a0e6c622bc36b8fac069cc9a2b36a9c60b47c20ca5f\": not found" Mar 17 18:54:28.756253 kubelet[2719]: I0317 18:54:28.756247 2719 scope.go:117] "RemoveContainer" containerID="d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360" Mar 17 18:54:28.756652 env[1558]: time="2025-03-17T18:54:28.756604040Z" level=error msg="ContainerStatus for \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\": not found" Mar 17 18:54:28.756806 kubelet[2719]: E0317 18:54:28.756782 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\": not found" containerID="d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360" Mar 17 18:54:28.756855 kubelet[2719]: I0317 18:54:28.756808 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360"} err="failed to get container status \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8fab36f8d6b9a2679fd50e921c2640186beea54c6adf607d028c0805d566360\": not found" Mar 17 18:54:28.756855 kubelet[2719]: I0317 18:54:28.756840 2719 scope.go:117] "RemoveContainer" containerID="62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1" Mar 17 18:54:28.757086 env[1558]: time="2025-03-17T18:54:28.757038076Z" level=error msg="ContainerStatus for \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\": not found" Mar 17 18:54:28.757237 kubelet[2719]: E0317 18:54:28.757214 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\": not found" containerID="62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1" Mar 17 18:54:28.757292 kubelet[2719]: I0317 18:54:28.757271 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1"} err="failed to get container status \"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"62262575ec5d2f9b8aacf1fa72bb49ce9b898389b43514d09c0113b8bcdc6ca1\": not found" Mar 17 18:54:28.757292 kubelet[2719]: I0317 18:54:28.757291 2719 scope.go:117] "RemoveContainer" containerID="64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e" Mar 17 18:54:28.758556 env[1558]: time="2025-03-17T18:54:28.758519581Z" level=info msg="RemoveContainer for \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\"" Mar 17 18:54:28.770381 env[1558]: time="2025-03-17T18:54:28.770335379Z" level=info msg="RemoveContainer for \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\" returns successfully" Mar 17 18:54:28.770660 kubelet[2719]: I0317 18:54:28.770636 2719 scope.go:117] "RemoveContainer" containerID="64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e" Mar 17 18:54:28.771079 env[1558]: time="2025-03-17T18:54:28.770981453Z" level=error msg="ContainerStatus for \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\": not found" Mar 17 18:54:28.771212 kubelet[2719]: E0317 18:54:28.771154 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\": not found" containerID="64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e" Mar 17 18:54:28.771256 kubelet[2719]: I0317 18:54:28.771215 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e"} err="failed to get container status \"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"64d1ddce60d70999298d378c301a26adcb5a5a9d3c728801a1daca4d2d30ca4e\": not found" Mar 17 18:54:28.773536 kubelet[2719]: I0317 18:54:28.773515 2719 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-run\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773574 kubelet[2719]: I0317 18:54:28.773542 2719 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cilium-cgroup\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773574 kubelet[2719]: I0317 18:54:28.773552 2719 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-lib-modules\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773574 kubelet[2719]: I0317 18:54:28.773561 2719 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-etc-cni-netd\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773648 kubelet[2719]: I0317 18:54:28.773578 2719 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-x4j5l\" (UniqueName: \"kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-kube-api-access-x4j5l\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773648 kubelet[2719]: I0317 18:54:28.773587 2719 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-bpf-maps\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773648 kubelet[2719]: I0317 18:54:28.773599 2719 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-cni-path\") 
on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773648 kubelet[2719]: I0317 18:54:28.773607 2719 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ada4742e-e57f-46ea-bfe3-7e743ca4b565-xtables-lock\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773648 kubelet[2719]: I0317 18:54:28.773618 2719 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ada4742e-e57f-46ea-bfe3-7e743ca4b565-hubble-tls\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:28.773648 kubelet[2719]: I0317 18:54:28.773626 2719 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ada4742e-e57f-46ea-bfe3-7e743ca4b565-clustermesh-secrets\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:29.195639 kubelet[2719]: I0317 18:54:29.195603 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51" path="/var/lib/kubelet/pods/5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51/volumes" Mar 17 18:54:29.196256 kubelet[2719]: I0317 18:54:29.196238 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" path="/var/lib/kubelet/pods/ada4742e-e57f-46ea-bfe3-7e743ca4b565/volumes" Mar 17 18:54:29.262620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795-rootfs.mount: Deactivated successfully. Mar 17 18:54:29.262772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-646fc2a448ff6a675d130b9dd167c52020054a00284e3ae4c4a60c063ea3c795-shm.mount: Deactivated successfully. Mar 17 18:54:29.262880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c-rootfs.mount: Deactivated successfully. 
Mar 17 18:54:29.262960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e391217968bb089b703da88220a8a586079b00db70066d7324cbfa6180aa57c-shm.mount: Deactivated successfully. Mar 17 18:54:29.263041 systemd[1]: var-lib-kubelet-pods-5ae7de95\x2dcbe9\x2d4ec1\x2dac7e\x2d1ec1982c3a51-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d598hw.mount: Deactivated successfully. Mar 17 18:54:29.263122 systemd[1]: var-lib-kubelet-pods-ada4742e\x2de57f\x2d46ea\x2dbfe3\x2d7e743ca4b565-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx4j5l.mount: Deactivated successfully. Mar 17 18:54:29.263207 systemd[1]: var-lib-kubelet-pods-ada4742e\x2de57f\x2d46ea\x2dbfe3\x2d7e743ca4b565-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:54:29.263285 systemd[1]: var-lib-kubelet-pods-ada4742e\x2de57f\x2d46ea\x2dbfe3\x2d7e743ca4b565-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:54:29.416066 kubelet[2719]: E0317 18:54:29.416020 2719 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:54:30.292170 sshd[4304]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:30.294959 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:50650.service: Deactivated successfully. Mar 17 18:54:30.296027 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:54:30.296042 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:54:30.297411 systemd-logind[1542]: Removed session 24. Mar 17 18:54:30.365440 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:56026.service. 
Mar 17 18:54:30.820177 sshd[4474]: Accepted publickey for core from 10.200.16.10 port 56026 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:30.822225 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:30.827482 systemd[1]: Started session-25.scope. Mar 17 18:54:30.828756 systemd-logind[1542]: New session 25 of user core. Mar 17 18:54:32.371611 kubelet[2719]: I0317 18:54:32.371572 2719 topology_manager.go:215] "Topology Admit Handler" podUID="5e8e4307-8e0f-4b7f-973b-62dcacccdee1" podNamespace="kube-system" podName="cilium-crmfj" Mar 17 18:54:32.372068 kubelet[2719]: E0317 18:54:32.372053 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" containerName="mount-cgroup" Mar 17 18:54:32.372155 kubelet[2719]: E0317 18:54:32.372145 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" containerName="apply-sysctl-overwrites" Mar 17 18:54:32.372216 kubelet[2719]: E0317 18:54:32.372199 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51" containerName="cilium-operator" Mar 17 18:54:32.372272 kubelet[2719]: E0317 18:54:32.372263 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" containerName="cilium-agent" Mar 17 18:54:32.372333 kubelet[2719]: E0317 18:54:32.372318 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" containerName="mount-bpf-fs" Mar 17 18:54:32.372388 kubelet[2719]: E0317 18:54:32.372378 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" containerName="clean-cilium-state" Mar 17 18:54:32.372472 kubelet[2719]: I0317 18:54:32.372462 2719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada4742e-e57f-46ea-bfe3-7e743ca4b565" 
containerName="cilium-agent" Mar 17 18:54:32.372541 kubelet[2719]: I0317 18:54:32.372521 2719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae7de95-cbe9-4ec1-ac7e-1ec1982c3a51" containerName="cilium-operator" Mar 17 18:54:32.378754 kubelet[2719]: W0317 18:54:32.378730 2719 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.378949 kubelet[2719]: E0317 18:54:32.378935 2719 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.379568 kubelet[2719]: W0317 18:54:32.379228 2719 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.379568 kubelet[2719]: E0317 18:54:32.379254 2719 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.379568 kubelet[2719]: W0317 18:54:32.379293 2719 reflector.go:547] object-"kube-system"/"cilium-clustermesh": 
failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.379568 kubelet[2719]: E0317 18:54:32.379301 2719 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.379568 kubelet[2719]: W0317 18:54:32.379330 2719 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.379740 kubelet[2719]: E0317 18:54:32.379339 2719 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.7-a-2597755324" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-a-2597755324' and this object Mar 17 18:54:32.395792 kubelet[2719]: I0317 18:54:32.395760 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-clustermesh-secrets\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396004 kubelet[2719]: I0317 18:54:32.395990 2719 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-cgroup\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396093 kubelet[2719]: I0317 18:54:32.396082 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-etc-cni-netd\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396173 kubelet[2719]: I0317 18:54:32.396161 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-ipsec-secrets\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396248 kubelet[2719]: I0317 18:54:32.396235 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxh9m\" (UniqueName: \"kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-kube-api-access-gxh9m\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396323 kubelet[2719]: I0317 18:54:32.396312 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-run\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396400 kubelet[2719]: I0317 18:54:32.396389 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hostproc\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396474 kubelet[2719]: I0317 18:54:32.396461 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-xtables-lock\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396549 kubelet[2719]: I0317 18:54:32.396536 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-kernel\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396626 kubelet[2719]: I0317 18:54:32.396615 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-lib-modules\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396711 kubelet[2719]: I0317 18:54:32.396696 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-config-path\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396798 kubelet[2719]: I0317 18:54:32.396786 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-net\") pod \"cilium-crmfj\" (UID: 
\"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396894 kubelet[2719]: I0317 18:54:32.396881 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hubble-tls\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.396981 kubelet[2719]: I0317 18:54:32.396967 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-bpf-maps\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.397059 kubelet[2719]: I0317 18:54:32.397047 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cni-path\") pod \"cilium-crmfj\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " pod="kube-system/cilium-crmfj" Mar 17 18:54:32.408515 sshd[4474]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:32.412861 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:56026.service: Deactivated successfully. Mar 17 18:54:32.414595 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:54:32.415360 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:54:32.416546 systemd-logind[1542]: Removed session 25. Mar 17 18:54:32.483366 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:56040.service. 
Mar 17 18:54:32.938005 sshd[4486]: Accepted publickey for core from 10.200.16.10 port 56040 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:32.939399 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:32.944808 systemd[1]: Started session-26.scope. Mar 17 18:54:32.945186 systemd-logind[1542]: New session 26 of user core. Mar 17 18:54:33.302281 kubelet[2719]: E0317 18:54:33.302234 2719 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-crmfj" podUID="5e8e4307-8e0f-4b7f-973b-62dcacccdee1" Mar 17 18:54:33.355180 sshd[4486]: pam_unix(sshd:session): session closed for user core Mar 17 18:54:33.358501 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:56040.service: Deactivated successfully. Mar 17 18:54:33.359873 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:54:33.360239 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:54:33.361198 systemd-logind[1542]: Removed session 26. Mar 17 18:54:33.428210 systemd[1]: Started sshd@24-10.200.20.37:22-10.200.16.10:56050.service. 
Mar 17 18:54:33.499359 kubelet[2719]: E0317 18:54:33.499324 2719 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 18:54:33.499878 kubelet[2719]: E0317 18:54:33.499751 2719 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:54:33.499878 kubelet[2719]: E0317 18:54:33.499324 2719 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Mar 17 18:54:33.499878 kubelet[2719]: E0317 18:54:33.499806 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-clustermesh-secrets podName:5e8e4307-8e0f-4b7f-973b-62dcacccdee1 nodeName:}" failed. No retries permitted until 2025-03-17 18:54:33.999784137 +0000 UTC m=+254.945478721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-clustermesh-secrets") pod "cilium-crmfj" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:54:33.499878 kubelet[2719]: E0317 18:54:33.499848 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-config-path podName:5e8e4307-8e0f-4b7f-973b-62dcacccdee1 nodeName:}" failed. No retries permitted until 2025-03-17 18:54:33.999841136 +0000 UTC m=+254.945535720 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-config-path") pod "cilium-crmfj" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:54:33.500081 kubelet[2719]: E0317 18:54:33.499861 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-ipsec-secrets podName:5e8e4307-8e0f-4b7f-973b-62dcacccdee1 nodeName:}" failed. No retries permitted until 2025-03-17 18:54:33.999855496 +0000 UTC m=+254.945550040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-ipsec-secrets") pod "cilium-crmfj" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1") : failed to sync secret cache: timed out waiting for the condition Mar 17 18:54:33.710463 kubelet[2719]: I0317 18:54:33.707046 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxh9m\" (UniqueName: \"kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-kube-api-access-gxh9m\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710463 kubelet[2719]: I0317 18:54:33.707410 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-net\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710463 kubelet[2719]: I0317 18:54:33.707439 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-xtables-lock\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: 
\"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710463 kubelet[2719]: I0317 18:54:33.707473 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cni-path\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710463 kubelet[2719]: I0317 18:54:33.707492 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-cgroup\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710463 kubelet[2719]: I0317 18:54:33.707509 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-kernel\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710713 kubelet[2719]: I0317 18:54:33.707538 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-lib-modules\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710713 kubelet[2719]: I0317 18:54:33.707556 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-run\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710713 kubelet[2719]: I0317 18:54:33.707581 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hostproc\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710713 kubelet[2719]: I0317 18:54:33.707596 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-bpf-maps\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710713 kubelet[2719]: I0317 18:54:33.707700 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-etc-cni-netd\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710713 kubelet[2719]: I0317 18:54:33.707722 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hubble-tls\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:33.710937 kubelet[2719]: I0317 18:54:33.707812 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.710937 kubelet[2719]: I0317 18:54:33.707867 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.710937 kubelet[2719]: I0317 18:54:33.707886 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.710937 kubelet[2719]: I0317 18:54:33.707901 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cni-path" (OuterVolumeSpecName: "cni-path") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.710937 kubelet[2719]: I0317 18:54:33.707914 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.711171 kubelet[2719]: I0317 18:54:33.707930 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hostproc" (OuterVolumeSpecName: "hostproc") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.711171 kubelet[2719]: I0317 18:54:33.707944 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.711171 kubelet[2719]: I0317 18:54:33.707956 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.711171 kubelet[2719]: I0317 18:54:33.707969 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.711171 kubelet[2719]: I0317 18:54:33.707985 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:54:33.714486 kubelet[2719]: I0317 18:54:33.712995 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:33.715412 systemd[1]: var-lib-kubelet-pods-5e8e4307\x2d8e0f\x2d4b7f\x2d973b\x2d62dcacccdee1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:54:33.718181 systemd[1]: var-lib-kubelet-pods-5e8e4307\x2d8e0f\x2d4b7f\x2d973b\x2d62dcacccdee1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxh9m.mount: Deactivated successfully. Mar 17 18:54:33.719183 kubelet[2719]: I0317 18:54:33.719007 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-kube-api-access-gxh9m" (OuterVolumeSpecName: "kube-api-access-gxh9m") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "kube-api-access-gxh9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:54:33.808919 kubelet[2719]: I0317 18:54:33.808889 2719 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-run\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809101 kubelet[2719]: I0317 18:54:33.809090 2719 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hostproc\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809198 kubelet[2719]: I0317 18:54:33.809176 2719 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-lib-modules\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809283 kubelet[2719]: I0317 18:54:33.809273 2719 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-bpf-maps\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809364 kubelet[2719]: I0317 18:54:33.809355 2719 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-etc-cni-netd\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809423 kubelet[2719]: I0317 18:54:33.809414 2719 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-hubble-tls\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809484 kubelet[2719]: I0317 18:54:33.809466 2719 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gxh9m\" (UniqueName: \"kubernetes.io/projected/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-kube-api-access-gxh9m\") on node 
\"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809541 kubelet[2719]: I0317 18:54:33.809531 2719 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-net\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809612 kubelet[2719]: I0317 18:54:33.809602 2719 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-xtables-lock\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809672 kubelet[2719]: I0317 18:54:33.809663 2719 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-cgroup\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809732 kubelet[2719]: I0317 18:54:33.809716 2719 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-host-proc-sys-kernel\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.809797 kubelet[2719]: I0317 18:54:33.809787 2719 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cni-path\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\"" Mar 17 18:54:33.877729 sshd[4502]: Accepted publickey for core from 10.200.16.10 port 56050 ssh2: RSA SHA256:paJy8VmUDtRyOvFhLDJavsN2rbrMSHSIk56mCEIjqlY Mar 17 18:54:33.879069 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:54:33.883651 systemd[1]: Started session-27.scope. Mar 17 18:54:33.883897 systemd-logind[1542]: New session 27 of user core. 
Mar 17 18:54:34.112360 kubelet[2719]: I0317 18:54:34.112321 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-ipsec-secrets\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:34.112360 kubelet[2719]: I0317 18:54:34.112364 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-config-path\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:34.112551 kubelet[2719]: I0317 18:54:34.112389 2719 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-clustermesh-secrets\") pod \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\" (UID: \"5e8e4307-8e0f-4b7f-973b-62dcacccdee1\") " Mar 17 18:54:34.114668 kubelet[2719]: I0317 18:54:34.114627 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:54:34.117741 systemd[1]: var-lib-kubelet-pods-5e8e4307\x2d8e0f\x2d4b7f\x2d973b\x2d62dcacccdee1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:54:34.119961 kubelet[2719]: I0317 18:54:34.119911 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:54:34.120098 kubelet[2719]: I0317 18:54:34.119803 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5e8e4307-8e0f-4b7f-973b-62dcacccdee1" (UID: "5e8e4307-8e0f-4b7f-973b-62dcacccdee1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:54:34.213575 kubelet[2719]: I0317 18:54:34.213535 2719 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-clustermesh-secrets\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\""
Mar 17 18:54:34.213575 kubelet[2719]: I0317 18:54:34.213570 2719 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-ipsec-secrets\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\""
Mar 17 18:54:34.213575 kubelet[2719]: I0317 18:54:34.213580 2719 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e8e4307-8e0f-4b7f-973b-62dcacccdee1-cilium-config-path\") on node \"ci-3510.3.7-a-2597755324\" DevicePath \"\""
Mar 17 18:54:34.417305 kubelet[2719]: E0317 18:54:34.417201 2719 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:54:34.715164 systemd[1]: var-lib-kubelet-pods-5e8e4307\x2d8e0f\x2d4b7f\x2d973b\x2d62dcacccdee1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:54:34.730299 kubelet[2719]: I0317 18:54:34.730253 2719 topology_manager.go:215] "Topology Admit Handler" podUID="6849b9ef-8eb4-4613-8c8b-de95e8f7fd74" podNamespace="kube-system" podName="cilium-jqlxz"
Mar 17 18:54:34.816457 kubelet[2719]: I0317 18:54:34.816419 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-cilium-ipsec-secrets\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.816673 kubelet[2719]: I0317 18:54:34.816657 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-etc-cni-netd\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.816771 kubelet[2719]: I0317 18:54:34.816758 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-cilium-config-path\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.816910 kubelet[2719]: I0317 18:54:34.816869 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-hubble-tls\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817020 kubelet[2719]: I0317 18:54:34.817008 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-cilium-cgroup\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817119 kubelet[2719]: I0317 18:54:34.817107 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-hostproc\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817212 kubelet[2719]: I0317 18:54:34.817200 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-host-proc-sys-net\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817326 kubelet[2719]: I0317 18:54:34.817312 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-host-proc-sys-kernel\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817411 kubelet[2719]: I0317 18:54:34.817399 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-cilium-run\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817498 kubelet[2719]: I0317 18:54:34.817487 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-bpf-maps\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817581 kubelet[2719]: I0317 18:54:34.817570 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-xtables-lock\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817662 kubelet[2719]: I0317 18:54:34.817651 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-clustermesh-secrets\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817745 kubelet[2719]: I0317 18:54:34.817732 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-cni-path\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817845 kubelet[2719]: I0317 18:54:34.817833 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-lib-modules\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:34.817943 kubelet[2719]: I0317 18:54:34.817930 2719 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwlk\" (UniqueName: \"kubernetes.io/projected/6849b9ef-8eb4-4613-8c8b-de95e8f7fd74-kube-api-access-6cwlk\") pod \"cilium-jqlxz\" (UID: \"6849b9ef-8eb4-4613-8c8b-de95e8f7fd74\") " pod="kube-system/cilium-jqlxz"
Mar 17 18:54:35.034530 env[1558]: time="2025-03-17T18:54:35.034191995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jqlxz,Uid:6849b9ef-8eb4-4613-8c8b-de95e8f7fd74,Namespace:kube-system,Attempt:0,}"
Mar 17 18:54:35.086133 env[1558]: time="2025-03-17T18:54:35.086060310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:54:35.086297 env[1558]: time="2025-03-17T18:54:35.086144469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:54:35.086297 env[1558]: time="2025-03-17T18:54:35.086170829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:54:35.086525 env[1558]: time="2025-03-17T18:54:35.086442746Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492 pid=4529 runtime=io.containerd.runc.v2
Mar 17 18:54:35.125358 env[1558]: time="2025-03-17T18:54:35.125309283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jqlxz,Uid:6849b9ef-8eb4-4613-8c8b-de95e8f7fd74,Namespace:kube-system,Attempt:0,} returns sandbox id \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\""
Mar 17 18:54:35.129514 env[1558]: time="2025-03-17T18:54:35.129467637Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:54:35.188171 env[1558]: time="2025-03-17T18:54:35.188110479Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dfe90c5da64c27716e68876797580fd8f96dd20b8d760a0a180ee83c1e440f54\""
Mar 17 18:54:35.189978 env[1558]: time="2025-03-17T18:54:35.188921910Z" level=info msg="StartContainer for \"dfe90c5da64c27716e68876797580fd8f96dd20b8d760a0a180ee83c1e440f54\""
Mar 17 18:54:35.196310 kubelet[2719]: I0317 18:54:35.196257 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8e4307-8e0f-4b7f-973b-62dcacccdee1" path="/var/lib/kubelet/pods/5e8e4307-8e0f-4b7f-973b-62dcacccdee1/volumes"
Mar 17 18:54:35.251738 env[1558]: time="2025-03-17T18:54:35.251698186Z" level=info msg="StartContainer for \"dfe90c5da64c27716e68876797580fd8f96dd20b8d760a0a180ee83c1e440f54\" returns successfully"
Mar 17 18:54:35.358390 env[1558]: time="2025-03-17T18:54:35.358006789Z" level=info msg="shim disconnected" id=dfe90c5da64c27716e68876797580fd8f96dd20b8d760a0a180ee83c1e440f54
Mar 17 18:54:35.358624 env[1558]: time="2025-03-17T18:54:35.358602102Z" level=warning msg="cleaning up after shim disconnected" id=dfe90c5da64c27716e68876797580fd8f96dd20b8d760a0a180ee83c1e440f54 namespace=k8s.io
Mar 17 18:54:35.358689 env[1558]: time="2025-03-17T18:54:35.358674741Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:35.369750 env[1558]: time="2025-03-17T18:54:35.369706381Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4611 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:35.664641 kubelet[2719]: I0317 18:54:35.664378 2719 setters.go:580] "Node became not ready" node="ci-3510.3.7-a-2597755324" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:54:35Z","lastTransitionTime":"2025-03-17T18:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:54:35.690655 env[1558]: time="2025-03-17T18:54:35.690612326Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:54:35.739162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163802712.mount: Deactivated successfully.
Mar 17 18:54:35.748120 env[1558]: time="2025-03-17T18:54:35.748058581Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"036fe947425a69ff8a758fd2333b5a39f1f69ab7bfbbadbf08358b9d578a48d2\""
Mar 17 18:54:35.749761 env[1558]: time="2025-03-17T18:54:35.749722483Z" level=info msg="StartContainer for \"036fe947425a69ff8a758fd2333b5a39f1f69ab7bfbbadbf08358b9d578a48d2\""
Mar 17 18:54:35.798617 env[1558]: time="2025-03-17T18:54:35.798578791Z" level=info msg="StartContainer for \"036fe947425a69ff8a758fd2333b5a39f1f69ab7bfbbadbf08358b9d578a48d2\" returns successfully"
Mar 17 18:54:35.833454 env[1558]: time="2025-03-17T18:54:35.833409251Z" level=info msg="shim disconnected" id=036fe947425a69ff8a758fd2333b5a39f1f69ab7bfbbadbf08358b9d578a48d2
Mar 17 18:54:35.833701 env[1558]: time="2025-03-17T18:54:35.833683848Z" level=warning msg="cleaning up after shim disconnected" id=036fe947425a69ff8a758fd2333b5a39f1f69ab7bfbbadbf08358b9d578a48d2 namespace=k8s.io
Mar 17 18:54:35.833763 env[1558]: time="2025-03-17T18:54:35.833751048Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:35.841091 env[1558]: time="2025-03-17T18:54:35.841050808Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4671 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:36.695027 env[1558]: time="2025-03-17T18:54:36.694935612Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:54:36.715145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-036fe947425a69ff8a758fd2333b5a39f1f69ab7bfbbadbf08358b9d578a48d2-rootfs.mount: Deactivated successfully.
Mar 17 18:54:36.737372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445317986.mount: Deactivated successfully.
Mar 17 18:54:36.744551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127680619.mount: Deactivated successfully.
Mar 17 18:54:36.757873 env[1558]: time="2025-03-17T18:54:36.757771362Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1207da6908d98f8f9dae8d6a21d7997e97a36322858e438f5f2729b6413ed3d\""
Mar 17 18:54:36.759519 env[1558]: time="2025-03-17T18:54:36.758649953Z" level=info msg="StartContainer for \"b1207da6908d98f8f9dae8d6a21d7997e97a36322858e438f5f2729b6413ed3d\""
Mar 17 18:54:36.808943 env[1558]: time="2025-03-17T18:54:36.808888561Z" level=info msg="StartContainer for \"b1207da6908d98f8f9dae8d6a21d7997e97a36322858e438f5f2729b6413ed3d\" returns successfully"
Mar 17 18:54:36.848700 env[1558]: time="2025-03-17T18:54:36.848639525Z" level=info msg="shim disconnected" id=b1207da6908d98f8f9dae8d6a21d7997e97a36322858e438f5f2729b6413ed3d
Mar 17 18:54:36.848700 env[1558]: time="2025-03-17T18:54:36.848692924Z" level=warning msg="cleaning up after shim disconnected" id=b1207da6908d98f8f9dae8d6a21d7997e97a36322858e438f5f2729b6413ed3d namespace=k8s.io
Mar 17 18:54:36.848700 env[1558]: time="2025-03-17T18:54:36.848705084Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:36.856000 env[1558]: time="2025-03-17T18:54:36.855954125Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4728 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:37.705854 env[1558]: time="2025-03-17T18:54:37.703144972Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:54:37.735251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810869547.mount: Deactivated successfully.
Mar 17 18:54:37.743697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260815688.mount: Deactivated successfully.
Mar 17 18:54:37.761922 env[1558]: time="2025-03-17T18:54:37.761871322Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36c0f2dfd7e1ccd7fb1b4e53bc3aad2d1ec560a54fdd72a9c5d11db4010f5834\""
Mar 17 18:54:37.763991 env[1558]: time="2025-03-17T18:54:37.763960739Z" level=info msg="StartContainer for \"36c0f2dfd7e1ccd7fb1b4e53bc3aad2d1ec560a54fdd72a9c5d11db4010f5834\""
Mar 17 18:54:37.808612 env[1558]: time="2025-03-17T18:54:37.808569766Z" level=info msg="StartContainer for \"36c0f2dfd7e1ccd7fb1b4e53bc3aad2d1ec560a54fdd72a9c5d11db4010f5834\" returns successfully"
Mar 17 18:54:37.837730 env[1558]: time="2025-03-17T18:54:37.837680964Z" level=info msg="shim disconnected" id=36c0f2dfd7e1ccd7fb1b4e53bc3aad2d1ec560a54fdd72a9c5d11db4010f5834
Mar 17 18:54:37.837730 env[1558]: time="2025-03-17T18:54:37.837728004Z" level=warning msg="cleaning up after shim disconnected" id=36c0f2dfd7e1ccd7fb1b4e53bc3aad2d1ec560a54fdd72a9c5d11db4010f5834 namespace=k8s.io
Mar 17 18:54:37.837730 env[1558]: time="2025-03-17T18:54:37.837737324Z" level=info msg="cleaning up dead shim"
Mar 17 18:54:37.844681 env[1558]: time="2025-03-17T18:54:37.844641727Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:54:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4787 runtime=io.containerd.runc.v2\n"
Mar 17 18:54:38.702578 env[1558]: time="2025-03-17T18:54:38.702136792Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:54:38.734306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115662476.mount: Deactivated successfully.
Mar 17 18:54:38.751689 env[1558]: time="2025-03-17T18:54:38.751647761Z" level=info msg="CreateContainer within sandbox \"5658f44e83982ce1bae6648416a42cf775e3eedfd61a953843d07a78d32ca492\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02\""
Mar 17 18:54:38.753767 env[1558]: time="2025-03-17T18:54:38.753736258Z" level=info msg="StartContainer for \"58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02\""
Mar 17 18:54:38.809333 env[1558]: time="2025-03-17T18:54:38.809252120Z" level=info msg="StartContainer for \"58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02\" returns successfully"
Mar 17 18:54:39.252851 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Mar 17 18:54:39.735177 kubelet[2719]: I0317 18:54:39.734144 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jqlxz" podStartSLOduration=5.734129844 podStartE2EDuration="5.734129844s" podCreationTimestamp="2025-03-17 18:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:54:39.733734649 +0000 UTC m=+260.679429233" watchObservedRunningTime="2025-03-17 18:54:39.734129844 +0000 UTC m=+260.679824428"
Mar 17 18:54:40.322222 systemd[1]: run-containerd-runc-k8s.io-58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02-runc.JAICv7.mount: Deactivated successfully.
Mar 17 18:54:41.865847 systemd-networkd[1729]: lxc_health: Link UP
Mar 17 18:54:41.890542 systemd-networkd[1729]: lxc_health: Gained carrier
Mar 17 18:54:41.890849 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:54:42.454533 systemd[1]: run-containerd-runc-k8s.io-58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02-runc.H23dl5.mount: Deactivated successfully.
Mar 17 18:54:43.734004 systemd-networkd[1729]: lxc_health: Gained IPv6LL
Mar 17 18:54:44.631673 systemd[1]: run-containerd-runc-k8s.io-58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02-runc.19LTUq.mount: Deactivated successfully.
Mar 17 18:54:46.766616 systemd[1]: run-containerd-runc-k8s.io-58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02-runc.eYny8s.mount: Deactivated successfully.
Mar 17 18:54:48.939746 systemd[1]: run-containerd-runc-k8s.io-58b653b739b9c1e9514e4349a5eaca589abdede5dcbdb159fcbbe987cd385c02-runc.sEYMR1.mount: Deactivated successfully.
Mar 17 18:54:49.075936 sshd[4502]: pam_unix(sshd:session): session closed for user core
Mar 17 18:54:49.078718 systemd[1]: sshd@24-10.200.20.37:22-10.200.16.10:56050.service: Deactivated successfully.
Mar 17 18:54:49.080141 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:54:49.080708 systemd-logind[1542]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:54:49.081561 systemd-logind[1542]: Removed session 27.